
Microsoft at MWC Barcelona: Introducing Microsoft HoloLens 2

This evening at a press event to kick off MWC Barcelona, I had the pleasure of joining CEO Satya Nadella and Technical Fellow Alex Kipman onstage to talk in depth about Microsoft’s worldview for the intelligent cloud and intelligent edge.

As part of today’s press event, we also introduced the world to HoloLens 2.

YouTube Video

This is a tremendously exciting time for Microsoft, our partners, our customers, the computing industry and indeed the world. The virtually limitless computing power and capability of the cloud combined with increasingly intelligent and perceptive edge devices embedded throughout the physical world create experiences we could only imagine a few short years ago.

When intelligent cloud and intelligent edge experiences are infused with mixed reality, we have a framework for achieving amazing things and empowering even more people.

Today represents an important milestone for Microsoft. This moment captures the very best efforts and passion of numerous teams spanning Azure, HoloLens, Dynamics 365 and Microsoft Devices — this truly is a moment where the sum is greater than the parts. From cutting-edge hardware design to mixed reality-infused cloud services, today’s announcements represent the collective work of many teams. And none of this would be possible without our passionate community of customers, partners and developers.

On behalf of everyone on the team, it is my privilege to introduce you to HoloLens 2 and all the announcements we made today to kick off MWC Barcelona.

Introducing HoloLens 2

Side view of sleek black HoloLens 2

Since the release of HoloLens in 2016 we have seen mixed reality transform the way work gets done. We have unlocked super-powers for hundreds of thousands of people who go to work every day. From construction sites to factory floors, from operating rooms to classrooms, HoloLens is changing how we work, learn, communicate and get things done.

We are entering a new era of computing, one in which the digital world goes beyond two-dimensional screens and enters the three-dimensional world. This new collaborative computing era will empower us all to achieve more, break boundaries and work together with greater ease and immediacy in 3D.

Today, we are proud to introduce the world to Microsoft HoloLens 2.

Our customers asked us to focus on three key areas to make HoloLens even better. They wanted HoloLens 2 to be even more immersive and more comfortable, and to accelerate the time-to-value.

Immersion is greatly enhanced by advancements across the board, including in the visual display system, making holograms even more vibrant and realistic. We have more than doubled the field of view in HoloLens 2, while maintaining the industry-leading holographic density of 47 pixels per degree of sight. HoloLens 2 contains a new display system that enables us to achieve these significant advances in performance at low power.

We have also completely refreshed the way you interact with holograms in HoloLens 2. Taking advantage of our new time-of-flight depth sensor, combined with built-in AI and semantic understanding, HoloLens 2 enables direct manipulation of holograms with the same instinctual interactions you’d use with physical objects in the real world.

In addition to the improvements in the display engine and direct manipulation of holograms, HoloLens 2 contains eye-tracking sensors that make interacting with holograms even more natural. You can log in with Windows Hello enterprise-grade authentication through iris recognition, making it easy for multiple people to quickly and securely share the device.

Comfort is enhanced by a more balanced center of gravity, the use of light carbon-fiber material and a new mechanism for donning the device without readjusting. We’ve improved the thermal management with new vapor chamber technology and accounted for the wide physiological variability in the size and shape of human heads by designing HoloLens 2 to comfortably adjust and fit almost anyone. The new dial-in fit system makes it comfortable to wear for hours on end, and you can keep your glasses on because HoloLens 2 adapts to you by sliding right over them. When it’s time to step out of mixed reality, flip the visor up and switch tasks in seconds. Together, these enhancements have more than tripled the measured comfort and ergonomics of the device.

Time-to-value is accelerated by Microsoft mixed reality applications like Dynamics 365 Remote Assist, Dynamics 365 Layout and the new Dynamics 365 Guides applications. In addition to the in-box value, our ecosystem of mixed reality partners provides a broad range of offerings built on HoloLens that deliver value across a range of industries and use cases. This partner ecosystem is being supplemented by a new wave of mixed reality entrepreneurs who are realizing the potential of devices like HoloLens 2 and the Azure services that give them the spatial, speech and vision intelligence needed for mixed reality, plus battle-tested cloud services for storage, security and application insights.

Building on the unique capabilities of the original HoloLens, HoloLens 2 is the ultimate intelligent edge device. And when coupled with existing and new Azure services, HoloLens 2 becomes even more capable, right out of the box.

HoloLens 2 will be available this year at a price of $3,500. Bundles including Dynamics 365 Remote Assist start at $125/month. HoloLens 2 will be initially available in the United States, Japan, China, Germany, Canada, United Kingdom, Ireland, France, Australia and New Zealand. Customers can preorder HoloLens 2 starting today at https://www.microsoft.com/en-us/hololens/buy.

In addition to HoloLens 2, we were also excited to make the following announcements at MWC Barcelona.

Azure Kinect Developer Kit (DK)

Front and side view of compact silver Azure Kinect DK device

The Azure Kinect DK is a developer kit that combines our industry-leading AI sensors in a single device. At its core are the time-of-flight depth sensor we developed for HoloLens 2, a high-definition RGB camera and a 7-microphone circular array, enabling the development of advanced computer vision and speech solutions with Azure. It enables solutions that don’t just sense but understand the world — the people, places and things around it. A good example of such a solution in the healthcare space is Ocuvera, which is using this technology to prevent patients from falling in hospitals. In the U.S. alone, more than 1 million hospital patients fall every year, and 11,000 of those falls are fatal. With Azure Kinect, the environmental precursors to a fall can be detected and a nurse notified in time to reach patients before they fall. Initially available in the U.S. and China, the Azure Kinect DK is available for preorder today at $399. Visit Azure.com/Kinect for more info.

Dynamics 365 Guides

When we announced Dynamics 365 Remote Assist and Dynamics 365 Layout on October 1, we talked about them as the “first” of our mixed reality applications for HoloLens.

Today, we are proud to announce our latest offering: Microsoft Dynamics 365 Guides.

Dynamics 365 Guides is a new mixed reality app that empowers employees to learn by doing. Guides enhances learning with step-by-step instructions that guide employees to the tools and parts they need and how to use them in real work situations. In addition to the experience of using Guides on HoloLens, a Guides PC app makes it easy to create interactive content, attach photos and videos, import 3D models and customize training to turn institutional knowledge into a repeatable learning tool.

This application will help minimize downtime and increase efficiency for mission-critical equipment and processes. It becomes the third Dynamics 365 application that will work on both the previous generation of HoloLens and the new HoloLens 2.

Dynamics 365 Guides is available in preview starting today.

Man wearing HoloLens 2 consults a hologram of a guide as he works on machinery

Azure Mixed Reality Services

Today we also announced two new Azure mixed reality services. These services are designed to help every developer and every business build cross-platform, contextual and enterprise-grade mixed reality applications.

Azure Spatial Anchors enables businesses and developers to create mixed reality apps that map, designate and recall precise points of interest that are accessible across HoloLens, iOS and Android devices. These precise points of interest enable a range of scenarios, from shared mixed reality experiences to wayfinding across connected places. We’re already seeing this service help our customers work and learn with greater speed and ease in manufacturing, architecture, medical education and more.

Azure Remote Rendering helps people experience 3D without compromise to fuel better, faster decisions. Today, to interact with high-quality 3D models on mobile devices and mixed reality headsets, you often need to “decimate,” or simplify, 3D models to run on target hardware. But in scenarios like design reviews and medical planning, every detail matters, and simplifying assets can result in a loss of important detail that is needed for key decisions. This service will render high-quality 3D content in the cloud and stream it to edge devices, all in real time, with every detail intact.

Azure Spatial Anchors is in public preview as of today. Azure Remote Rendering is now in private preview in advance of its public preview.

Microsoft HoloLens Customization Program

HoloLens is being used in a variety of challenging environments, from construction sites and operating rooms to the International Space Station. HoloLens has passed the basic impact tests from several protective eyewear standards used in North America and Europe. It has been tested and found to conform to the basic impact protection requirements of ANSI Z87.1, CSA Z94.3 and EN 166. With HoloLens 2 we’re introducing the Microsoft HoloLens Customization Program to enable customers and partners to customize HoloLens 2 to fit their environmental needs.

The first to take advantage of the HoloLens Customization Program is our long-standing HoloLens partner Trimble, which last year announced Trimble Connect for HoloLens along with a new hard hat solution that improves the utility of mixed reality for practical field applications. Today it announced the Trimble XR10 with Microsoft HoloLens 2, a new wearable hard hat device that enables workers in safety-controlled environments to access holographic information on the worksite.

Hard hat incorporates HoloLens 2

Open principles

Finally, as we closed things out, Alex Kipman articulated a set of principles around our open approach with the mixed reality ecosystem.

We believe that for an ecosystem to truly thrive there should be no barriers to innovation or customer choice.

To that end, Alex described how HoloLens embraces the principles of open stores, open browsers and open developer platforms.

To illustrate our dedication to these principles, we announced that our friends at Mozilla are bringing a prototype of the Firefox Reality browser to HoloLens 2, demonstrating our commitment to openness and the immersive web. Alex was also joined by Tim Sweeney, founder and CEO of Epic Games, who announced that Unreal Engine 4 support is coming to HoloLens.

In the coming months we will have more announcements and details to share. We look forward to continuing this journey with you all.

Julia

Microsoft unveils details of London flagship store

This first physical retail store for Microsoft in the U.K. will open to the general public on July 11, joining other world-class Microsoft Store locations all over the world, including flagships in New York and Sydney and stores across the U.S., Canada and Puerto Rico and online in more than 190 countries.

This store is the latest step in our almost 40-year investment in the U.K., including recently doubling the size of Microsoft’s Azure regions to help more organizations digitally transform. We’re also committed to supporting the growth of digital skills in the U.K. in partnership with computing education and youth communities.

The flagship Microsoft Store in London will be located on Oxford Circus and will cover 21,932 square feet over three floors. It will feature an Answer Desk, a dedicated area where customers can get tech support, training, repairs and advice from trusted advisors on Microsoft products and services, no matter where the device was purchased, its brand or its operating system. A community theater, a space for tech, coding and STEM learning, will run free, year-round workshops and programs for customers. Whether you are a business owner looking for the latest tech to grow your business, a gamer who wants to join a community or show your skills in a tournament, a student wanting to brush up on coding or a teacher looking to bring Minecraft alive in the classroom, you will be able to learn and develop your digital skills.

Those who work, live, shop in or visit the U.K. will also be able to test and experience the latest technology, products and services from Microsoft and its partners. Interactive zones, surrounded by immersive video walls running throughout the store, will make this the best place to get hands-on with Surface, Windows, Office, Xbox and PC gaming, HoloLens mixed reality and more.

The site also adds to the growing list of innovative facilities Microsoft runs in the region, including three gaming studios, the start-up hub Reactor London and the global center of excellence for the development of artificial intelligence and other computing disciplines in the Microsoft Research Lab in Cambridge.

Designed to build connections with the local community, customers and businesses, this store represents a unique way to deliver on our mission to empower every person and organization on the planet to achieve more. A flagship store in London has long been part of our vision for our physical and digital store presence, and this opening represents another step in our journey to meet our customers – from consumers to businesses – wherever they are and deepen our connection with them. London is one of the world’s most exciting shopping destinations, and we look forward to empowering customers to explore all that is possible with Microsoft.

For more information and to keep up to date with the latest information on the U.K. store opening, please follow us @MicrosoftStoreUK on Instagram and Facebook and visit us online at www.microsoft.com.



Web and Azure Tool Updates in Visual Studio 2019

Angelos Petropoulos


Hopefully by now you’ve seen that Visual Studio 2019 is now generally available. As you would expect, we’ve added improvements for web and Azure development. As a starting point, Visual Studio 2019 comes with a new experience for getting started with your code, and we updated the experience for creating ASP.NET and ASP.NET Core projects to match:

If you are publishing your application to Azure, you can now configure Azure App Service to use Azure Storage and Azure SQL Database instances right from the publish profile summary page, without leaving Visual Studio. This means that for any existing web application running in App Service, you can add SQL and Storage at any time; it is no longer limited to creation time.

By clicking the “Add” button you get to select between Azure Storage and Azure SQL Database (more Azure services to be supported in the future):

and then you get to choose between using an existing instance of Azure Storage that you provisioned in the past or provisioning a new one right then and there:

When you configure your Azure App Service through the publish profile as demonstrated above, Visual Studio will update the Azure App Service application settings to include the connection strings you have configured (e.g. in this case azgist). It will also apply hidden tags to the instances in Azure about how they are configured to work together so that this information is not lost and can be re-discovered later by other instances of Visual Studio.

For a 30-minute overview of developing with Azure in Visual Studio, check out the session we gave as part of the launch:

As always, we welcome your feedback. Tell us what you like and what you don’t like, tell us which features you are missing and which parts of the workflow work or don’t work for you. You can do this by submitting issues to Developer Community or contacting us via Twitter.

Angelos Petropoulos


The future of manufacturing is open

With the expansion of IoT across all industries, data is becoming the currency of innovation. Organizations have both an opportunity and a business imperative to adopt technologies quickly, build digital competencies, and offer new value-added services that will serve their broader ecosystem.

Manufacturing is an industry where IoT is having a transformational impact, yet which also requires many companies to come together for IoT to be effective. We see several challenges that slow down innovation in manufacturing, such as proprietary data structures from legacy industrial assets and closed industrial solutions. These closed structures foster data silos and limit productivity, hindering production and profitability. It takes more than new software to drive transformation—it takes a new approach to open standards, an ecosystem mindset, the ability to break out of the “walled garden” for data as well as new technology.

This is why Microsoft has invested heavily in making Azure work seamlessly with OPC UA. In fact, we are the leading contributor of open source software to the OPC Foundation. To further this open platform approach, we have collaborated with world-leading manufacturers to accelerate innovation in industrial IoT to shorten time to value. But we feel we need to do more, not just directly between Microsoft and our partners but across the industry and between the partners themselves. It’s not about what any one company can deliver within their operations – it’s about what they can share with others across the sector to help everyone achieve at new levels. It’s clearly a much bigger task than any one organization can take on, and today, I’m pleased to share more about the investments we are making to advance innovation in the manufacturing space by enabling open platforms.

Announcing the Open Manufacturing Platform

Today at Hannover Messe 2019, we are launching the Open Manufacturing Platform (OMP) together with the BMW Group, our partner on this initiative. Built on the Microsoft Azure Industrial IoT cloud platform, the OMP will provide a reference architecture and open data model framework for community members, who will both contribute to and learn from others around industrial IoT projects. We’ve set up an initial approach and are actively working to bring new community members on board. BMW’s initial use case focuses on its IoT platform, built on Microsoft Azure, which powers the second generation of autonomous transport systems at one of its sites, greatly simplifying its logistics processes and creating greater efficiency. More information about this and the partnership can be found here.

The OMP provides a single open platform architecture that liberates data from legacy industrial assets, standardizes data models for more efficient data correlation, and most importantly, enables manufacturers to share their data with ecosystem partners in a controlled and secure way, allowing others to benefit from their insights. With pre-built industrial use cases and reference designs, community members will work together to address common industrial challenges while maintaining ownership over their own data. Our news release, shared jointly with the BMW Group this morning, can be found here.

A rising tide that lifts all boats

The recognition of the need for an open approach is taking hold across the industry, as evidenced by SAP’s announcement today of the Open Industry 4.0 Alliance. The alliance, between SAP and a number of European manufacturing leaders, is focused on factories, plants and warehouses, and will help create an open ecosystem for the operation of highly automated factories.

OMP and the Open Industry 4.0 Alliance are complementary visions. Both recognize the need for an open platform for the cloud and intelligent edge on the ground in the factory. Both highlight an open data model and standards-based data exchange mechanisms that allow for cross-company collaboration.

We’ve been working closely with SAP on efforts like the Open Data Initiative and across the industry on a wide range of initiatives including the Industrial Internet Consortium, the Plattform Industrie 4.0 and the OPC Foundation. We look forward to continuing this fruitful partnership and working to align OMP and the Open Industry 4.0 Alliance. Collaboration is the lifeblood of future manufacturing and the more we work together, the more we can accomplish.

Read more here.


Transforming manufacturing with intelligent business applications

Manufacturing has been a driving force for industrial and societal transformation for centuries. Next week, at Hannover Messe—the world’s leading trade show for industrial technology—decades of industrial technology innovation at Microsoft are intersecting with Industry 4.0. As Industry 4.0 ushers in new technological advances to improve operations, competition and customer demands are keeping pace. Customers expect exceptional products and services, without exception, driving a need for greater innovation.

We are joined at the event by dozens of global manufacturers representing industries from automotive to consumer electronics and construction equipment, all using Microsoft Business Applications as a competitive differentiator: intelligent technologies that help transform the entire connected manufacturing ecosystem.

Optimize manufacturing operations and deliver new services

At Hannover Messe, we are showcasing how we empower manufacturers to connect Internet of Things (IoT) sensors on key business-critical assets to business transactions in Dynamics 365 for Finance and Operations. It’s a transformative solution to manage production and stock in real-time and proactively resolve issues, optimize manufacturing operations, maximize the value of assets, and take business performance and customer satisfaction to new heights. Read more about the solution here.

Annata, a leading Microsoft Partner for the automotive, heavy equipment, and industrial machinery industries, helped Iceland’s largest vehicle importer and distributor, Brimborg, rapidly expand into the commercial fleet rental market in response to the 2008-11 Icelandic financial crisis. Booth visitors will learn how Annata first unified Brimborg’s business of importing, distributing, selling and servicing cars and construction equipment with a single solution built on Dynamics 365 for Finance and Operations and the Annata 365 solution.

When the crisis hit, Brimborg deployed the extensive Rental module, which enabled it to manage this new part of its business. Brimborg is using IoT data flowing from its rental cars, together with the service scheduling capabilities of the Annata solution, to optimize and ensure timely servicing of the fleet. All rental cars are automatically called in for service and inspections, which is essential for keeping the fleet healthy. Brimborg is continuing to automate and innovate using IoT, Dynamics 365, and the Annata solution, not only to run its business more efficiently but also to open new business models and opportunities with customers in other areas of its business. The company is targeting four-minute car deliveries and 30-second customer returns, and is planning innovative new business models based on Microsoft and Annata solutions.

Make the leap to intelligent, connected field service

Attendees at Hannover Messe are getting a glimpse into the future of field service for manufacturers, powered by Microsoft IoT Central, Dynamics 365 for Field Service, and Dynamics 365 Mixed Reality for HoloLens and mobile devices.

Dynamics 365 Connected Field Service transforms field service organizations with solutions that detect and resolve issues remotely, before the customer even knows there is a problem, driving efficiency and cutting costs. The solution leverages connected devices and machine learning capabilities to combine remote monitoring, digital services, and predictive maintenance.

As recently announced, Dynamics 365 Remote Assist for mobile devices brings the HoloLens experience to mobile devices, allowing floor operators and field technicians to collaborate with remote experts and troubleshoot issues in context, without leaving the job site. Dynamics 365 Guides is a new mixed reality tool that allows employees to learn by doing, with interactive, step-by-step guidance presented on heads-up, hands-free displays in real work situations.

Dynamics 365 Connected Field Service.

Toyota’s North America Production Engineering team uses Dynamics 365 Layout and Dynamics 365 Remote Assist on Microsoft HoloLens in their North America manufacturing centers. Dynamics 365 Layout can improve business processes in valuable ways, such as the layout of digital twins of equipment on the manufacturing floor for safety and process verification, and creating innovative AR training practices through the use of holographic equipment instead of physical equipment in space. Dynamics 365 Remote Assist is in the early phases of being used to improve safety and reduce costs through the use of remote experts for equipment verification and incident response.

At Hannover Messe, Microsoft Partner Hitachi Solutions will showcase how they are helping global organizations create outcome-driven, connected field services with IoT and mixed reality to support maintenance work by field workers to increase uptime and service continuity.

Empower Your Changing Workforce

Start a conversation about Industry 4.0, and inevitably the topic lands on the workforce problem.

As a generation of highly skilled, specialized workers eyes retirement, manufacturers are in a bind: they must recruit young engineers and operators with the right skill sets while adjusting to a new workforce that is challenging traditional work culture.

Dynamics 365 for Talent helps HR teams at manufacturing organizations close the skills gap, offering intelligent tools to find, attract, and onboard skilled candidates. By automating many manual, time-consuming HR processes, Dynamics 365 for Talent lets HR professionals spend less time on the mundane, and more time on strategic initiatives that grow the business.

New this month, Dynamics 365 Guides allows employees to learn by doing with step-by-step instructions that guide employees to the tools and parts they need and how to use them in real work situations. Guides represents a new way to improve workflow efficiency. Now employees can learn while staying hands-on with their work. With the accompanying PC app, it’s possible for managers and frontline workers to create interactive training content by attaching photos, videos, and 3D models to information cards that stay with them while they work. When employees use Guides, information is collected to help managers understand how they’re doing and where they need help, further allowing people to improve the process.

Check out the video below to learn how PACCAR is exploring Guides and HoloLens to improve productivity and employee onboarding.

Get the full story at Hannover Messe

These solutions and customer stories are just a peek at how Microsoft Business Applications are helping transform manufacturing. If you are attending Hannover Messe, visit our booth to experience Dynamics 365 and Mixed Reality solutions firsthand, as well as chat with customers and partners.

Find more information about our location and sessions in this schedule, and be sure to check out the resources below.


Closing the skills gap in manufacturing with Microsoft 365

The manufacturing industry is being transformed by the rise in new digital industrial technology, known as Industry 4.0. New technologies are changing every stage of production, increasing productivity, optimizing operations, and unlocking new areas of growth. In order for manufacturers to capture the value this technology unlocks, they’ll need to ensure their workforce has the right skills and the right tools.

This is especially true as it relates to an organization’s Firstline Workforce. In manufacturing, Firstline Workers are the employees who deliver products and materials, drive product quality, and keep critical equipment running. To help manufacturers with their digital transformation, we’re enabling new ways to work with Microsoft 365, helping Firstline Workers learn, communicate, and collaborate more effectively.

Upskilling and equipping the Firstline Workforce

With the rise of Industry 4.0, manufacturers must reimagine the roles, skills, and tools to transform work throughout their organization. This means providing digital and soft skills, empowering workers with modern tools, and blurring the boundaries of technology with new immersive experiences. In an increasingly digital and complex landscape, the types of skills that employees need are rapidly evolving, and it is increasingly difficult for the workforce to keep pace.

Solutions in Microsoft 365 that enable Firstline Workers to learn, communicate, and collaborate include:

  • Using Microsoft Teams and SharePoint Online, manufacturers can securely centralize training efforts, easily distribute onboarding and training materials, and connect all levels of the organization to find and share best practices.

  • Using Microsoft Stream, organizations can deliver dynamic, role-based content and video to increase engagement and retention of training programs and support peer-to-peer information sharing.

To help equip workers to operate in a digitally-enabled manufacturing environment, Teams provides a single hub for teamwork to communicate, collaborate, and coordinate production from the engineering rooms to the factory floor.

  • Earlier this year, we announced new capabilities—including urgent messaging, location sharing, and image annotations—which organizations can use to create a safer and more efficient workplace. For example, these features can help workers identify, communicate, and share the location of hazardous spills to help reduce operational disruptions.

Image of three phones displaying urgent messaging, location sharing, and image annotations in Teams.

  • Additionally, Microsoft Teams is extensible and allows companies to transform business processes using Microsoft Flow and PowerApps. These services help to digitize everyday activities—such as documentation during quality assurance, data capture, and inventory management—helping reduce costs and free up time for Firstline Workers to focus on higher value activities.

As Industry 4.0 reshapes the manufacturing industry, finding new innovations to help workers learn, communicate, and collaborate remains a top priority. Microsoft is addressing these challenges through breakthroughs in hardware design, artificial intelligence (AI) experiences, mixed reality with HoloLens 2, and through business-ready solutions with Dynamics 365 and industry partners.

  • Using Dynamics 365 Remote Assist, technicians can solve problems faster by calling in remote experts via Microsoft Teams to help walk through repairs using mixed reality annotations, sharing diagrams and schematics. And with Dynamics 365 Guides, employees can learn new skills with step-by-step instructions that guide employees to the tools they need and how to use them in real work situations.

Helping our customers succeed

Leading manufacturers choose Microsoft 365 to prepare, equip, and empower their employees at all levels:

To accelerate productivity and information flow, Cummins replaced its existing productivity and collaboration tools with Microsoft 365, introducing a modern knowledge management and collaboration framework to reduce skills gaps and anchor a new culture of work.

“Our modern, tech-driven workplaces give employees the tools they need to innovate, so we can introduce new energy products and technology solutions to the market. It’s also a key strategy in attracting top talent.”
—Sherry Aaholm, VP and CIO for Cummins

Goodyear is using the integrated and adaptive tools in Microsoft 365 to help accelerate innovation and enable new capabilities inside the company. For example, Goodyear is connecting its workforce via tools like Teams, which is driving productivity and generating efficiencies to deliver the right products to the right place at the right time.

“Enhancing collaboration is crucial to us for improved decision making and to drive innovation, both in tires and beyond tires… Our multigenerational and multicultural global workforce is now sharing perspectives and ideas more quickly and easily than ever.”
—Sherry Neubert, CIO for The Goodyear Tire & Rubber Company

We’re incredibly excited about our opportunity to help manufacturers transform and we are just getting started!

Join us at Hannover Messe and learn more

Next week, members of the Microsoft team will be at Hannover Messe, the annual manufacturing conference. Visit us at Microsoft stand C40 and learn how Microsoft is enabling Intelligent Manufacturing.

post

Re-reading ASP.Net Core request bodies with EnableBuffering()


Jeremy

In some scenarios there’s a need to read the request body multiple times. Some examples include:

  • Logging the raw requests so they can be replayed in a load test environment
  • Middleware that reads the request body multiple times to process it

Usually Request.Body does not support rewinding, so it can only be read once. A straightforward solution is to save a copy of the stream in another stream that supports seeking so the content can be read multiple times from the copy.
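As a rough illustration of that copy-to-a-seekable-stream approach (a sketch, not code from this post; EnableBuffering(), covered next, is the supported way to get the same effect), a custom middleware could copy the body into a MemoryStream and swap it in:

```csharp
// Sketch of a custom middleware's InvokeAsync: buffer the request body
// into a seekable MemoryStream so it can be read more than once.
public async Task InvokeAsync(HttpContext context, RequestDelegate next)
{
    var copy = new MemoryStream();
    await context.Request.Body.CopyToAsync(copy);

    copy.Position = 0;
    // ...read from `copy` as many times as needed, rewinding in between...

    copy.Position = 0;
    context.Request.Body = copy; // downstream middleware reads the seekable copy
    await next(context);
}
```

Note that this buffers the entire body in memory unconditionally, which is exactly the trade-off the built-in buffering support manages for you.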

In the .NET Framework version of ASP.NET it was possible to read the body of an HTTP request multiple times using the HttpRequest.GetBufferedInputStream method. However, in ASP.NET Core a different approach must be used.

In ASP.NET Core 2.1 we added an extension method, EnableBuffering(), for HttpRequest. This is the suggested way to enable multiple reads of the request body. Here is an example usage in the InvokeAsync() method of a custom ASP.NET Core middleware:

public async Task InvokeAsync(HttpContext context, RequestDelegate next)
{
    context.Request.EnableBuffering();

    // Leave the body open so the next middleware can read it.
    using (var reader = new StreamReader(
        context.Request.Body,
        encoding: Encoding.UTF8,
        detectEncodingFromByteOrderMarks: false,
        bufferSize: 1024,
        leaveOpen: true))
    {
        var body = await reader.ReadToEndAsync();
        // Do some processing with body…

        // Reset the request body stream position so the next middleware can read it
        context.Request.Body.Position = 0;
    }

    // Call the next delegate/middleware in the pipeline
    await next(context);
}

The backing FileBufferingReadStream buffers in a memory stream up to a certain size, then falls back to a temporary file stream. By default the memory stream threshold is 30 KB. There are also other EnableBuffering() overloads that allow specifying a different threshold, and/or a limit for the total size:

public static void EnableBuffering(this HttpRequest request, int bufferThreshold)
public static void EnableBuffering(this HttpRequest request, long bufferLimit)
public static void EnableBuffering(this HttpRequest request, int bufferThreshold, long bufferLimit)

For example, a call of

context.Request.EnableBuffering(bufferThreshold: 1024 * 45, bufferLimit: 1024 * 100);

enables a read buffer with a limit of 100 KB. Data is buffered in memory until the content exceeds 45 KB, then it’s moved to a temporary file. By default there’s no limit on the buffer size, but if a limit is specified and the content of the request body exceeds it, a System.IO.IOException will be thrown.

These overloads offer flexibility if there’s a need to fine-tune the buffering behaviors. Just keep in mind that:

  • Even though the memory stream is rented from a pool, it still has a memory cost associated with it.
  • Once the amount read exceeds bufferThreshold, performance will be slower because a file stream is used.
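Putting the overloads together, here is a hedged sketch of enforcing a 100 KB cap and translating the resulting System.IO.IOException into an error response (the 413 status-code choice is an assumption for illustration, not from this post):

```csharp
// Inside a middleware's InvokeAsync: cap buffering at 100 KB,
// spilling to a temp file once the body exceeds 45 KB.
context.Request.EnableBuffering(bufferThreshold: 1024 * 45, bufferLimit: 1024 * 100);
try
{
    using (var reader = new StreamReader(
        context.Request.Body,
        encoding: Encoding.UTF8,
        detectEncodingFromByteOrderMarks: false,
        bufferSize: 1024,
        leaveOpen: true))
    {
        var body = await reader.ReadToEndAsync(); // throws IOException past the limit
        context.Request.Body.Position = 0;
    }
}
catch (IOException)
{
    // The request body exceeded bufferLimit; assumed handling shown here.
    context.Response.StatusCode = StatusCodes.Status413PayloadTooLarge;
    return;
}
```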
Jeremy Meng

Software Development Engineer


post

ASP.NET Core updates in .NET Core 3.0 Preview 2

.NET Core 3.0 Preview 2 is now available and it includes a bunch of new updates to ASP.NET Core.

Here’s the list of what’s new in this preview:

  • Razor Components
  • SignalR client-to-server streaming
  • Pipes on HttpContext
  • Generic host in templates
  • Endpoint routing updates

Get started

To get started with ASP.NET Core in .NET Core 3.0 Preview 2, install the .NET Core 3.0 Preview 2 SDK.

If you’re on Windows using Visual Studio, you’ll also want to install the latest preview of Visual Studio 2019.

Upgrade an existing project

To upgrade an existing ASP.NET Core app to .NET Core 3.0 Preview 2, follow the migration steps in the ASP.NET Core docs.

Add package for Json.NET

As part of the work to tidy up the ASP.NET Core shared framework, Json.NET is being removed from the shared framework and now needs to be added as a package.

To add back Json.NET support to an ASP.NET Core 3.0 project:
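The concrete steps aren’t reproduced here, but based on the Microsoft.AspNetCore.Mvc.NewtonsoftJson package they look roughly like the following (a sketch; the exact preview package version is omitted):

```csharp
// In the project file, reference the new package (version omitted):
// <PackageReference Include="Microsoft.AspNetCore.Mvc.NewtonsoftJson" Version="..." />

// Then opt back in to the Json.NET-based formatters in Startup.ConfigureServices:
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc()
        .AddNewtonsoftJson();
}
```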

Runtime compilation removed

As a consequence of cleaning up the ASP.NET Core shared framework to not depend on Roslyn, support for runtime compilation of pages and views has also been removed in this preview release. Instead, compilation of pages and views is performed at build time. In a future preview update we will provide a NuGet package for optionally enabling runtime compilation support in an app.

Other breaking changes and announcements

For a full list of other breaking changes and announcements for this release please see the ASP.NET Core Announcements repo.

Build modern web UI with Razor Components

Razor Components are a new way to build interactive client-side web UI with ASP.NET Core. .NET Core 3.0 Preview 2 adds support for Razor Components to ASP.NET Core, including the ability to host Razor Components on the server over a real-time connection. For those of you who have been following along with the experimental Blazor project, Razor Components represent the integration of the Blazor component model into ASP.NET Core along with the server-side Blazor hosting model.

Working with Razor Components

Razor Components are self-contained chunks of user interface (UI), such as a page, dialog, or form. Razor Components are normal .NET classes that define UI rendering logic and client-side event handlers, so you can write rich interactive web apps without having to write any JavaScript. Razor Components are typically authored using Razor syntax, a natural blend of HTML and C#. Razor Components are similar to Razor Pages and MVC Views in that they all use Razor. But unlike pages and views, which are built around a request/reply model, components are used specifically for handling UI composition.

To create, build, and run your first ASP.NET Core app with Razor Components run the following from the command line:

dotnet new razorcomponents -o WebApplication1
cd WebApplication1
dotnet run

Or create an ASP.NET Core Razor Components app in Visual Studio 2019:

Razor Components template

The generated solution has two projects: a server project (WebApplication1.Server), and a project with client-side web UI logic written using Razor Components (WebApplication1.App). The server project is an ASP.NET Core project set up to host the Razor Components.

Razor Components solution

Why two projects? In part it’s to separate the UI logic from the rest of the application. There is also a technical limitation in this preview: we are using the same Razor file extension (.cshtml) for Razor Components that we also use for Razor Pages and Views, but they have different compilation models, so they need to be kept separate. In a future preview we plan to introduce a new file extension for Razor Components (.razor) so that you can easily host your components, pages, and views all in the same project.

When you run the app you should see multiple pages (Home, Counter, and Fetch data) on different tabs. On the Counter page you can click a button to increment a counter without any page refresh. Normally this would require writing JavaScript, but here everything is written using Razor Components in C#!

Razor Components app

Here’s what the Counter component code looks like:

@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" onclick="@IncrementCount">Click me</button>

@functions {
 int currentCount = 0;

 void IncrementCount()
 {
 currentCount+=1;
 }
}

Making a request to /counter, as specified by the @page directive at the top, causes the component to render its content. Components render into an in-memory representation of the render tree that can then be used to update the UI in a very flexible and efficient way. Each time the “Click me” button is clicked the onclick event is fired and the IncrementCount method is called. The currentCount gets incremented and the component is rendered again. The runtime compares the newly rendered content with what was rendered previously and only the changes are then applied to the DOM (i.e. the updated count).

You can use components from other components using an HTML-like syntax where component parameters are specified using attributes or child content. For example, you can add a Counter component to the app’s home page like this:

@page "/"

<h1>Hello, world!</h1>

Welcome to your new app.

<Counter />

To add a parameter to the Counter component update the @functions block to add a property decorated with the [Parameter] attribute:

@functions {
 int currentCount = 0;

 [Parameter] int IncrementAmount { get; set; } = 1;

 void IncrementCount()
 {
 currentCount+=IncrementAmount;
 }
}

Now you can specify IncrementAmount parameter value using an attribute like this:

<Counter IncrementAmount="10" />

The Home page then has its own counter that increments by tens:

Count by tens

This is just an intro to what Razor Components are capable of. Razor Components are based on the Blazor component model and they support all of the same features (parameters, child content, templates, lifecycle events, component references, etc.). To learn more about Razor Components check out the component model docs and try out building your first Razor Components app yourself.

Hosting Razor Components

Because Razor Components decouple a component’s rendering logic from how the UI updates get applied, there is a lot of flexibility in how Razor Components can be hosted. ASP.NET Core Razor Components in .NET Core 3.0 adds support for hosting Razor Components on the server in an ASP.NET Core app where all UI updates are handled over a SignalR connection. The runtime handles sending UI events from the browser to the server and then applies UI updates sent by the server back to the browser after running the components. The same connection is also used to handle JavaScript interop calls.

ASP.NET Core Razor Components

Alternatively, Blazor is an experimental single page app framework that runs Razor Components directly in the browser using a WebAssembly based .NET runtime. In Blazor apps the UI updates from the Razor Components are all applied in process directly to the DOM.

Blazor

Support for the client-side Blazor hosting model using WebAssembly won’t ship with ASP.NET Core 3.0, but we are working towards shipping it with a later release.

Regardless of which hosting model you use for your Razor Components, the component model is the same. The same Razor Components can be used with either hosting model. You can even switch your app back and forth from being a client-side Blazor app or Razor Components running in ASP.NET Core using the same components as long as your components haven’t taken any server specific dependencies.

JavaScript interop

Razor Components can also use client-side JavaScript if needed. From a Razor Component you can call into any browser API or into an existing JavaScript library running in the browser. .NET library authors can use JavaScript interop to provide .NET wrappers for JavaScript APIs, so that they can be conveniently called from Razor Components.

public static class ExampleJsInterop // must be static to declare extension methods
{
    public static Task<string> Prompt(this IJSRuntime js, string text)
    {
        // showPrompt is implemented in wwwroot/exampleJsInterop.js
        return js.InvokeAsync<string>("exampleJsFunctions.showPrompt", text);
    }
}
@inject IJSRuntime JS

<button onclick="@OnClick">Show prompt</button>

@functions {
 string name;

 async Task OnClick() {
 name = await JS.Prompt("Hi! What's your name?");
 }
}

Both Razor Components and Blazor share the same JavaScript interop abstraction, so .NET libraries relying on JavaScript interop are usable by both types of apps. Check out the JavaScript interop docs for more details on using JavaScript interop and the Blazor community page for existing JavaScript interop libraries.

Sharing component libraries

Components can be easily shared and reused just like you would normal .NET classes. Razor Components can be built into component libraries and then shared as NuGet packages. You can find existing component libraries on the Blazor community page.

The .NET Core 3.0 Preview 2 SDK doesn’t include a project template for Razor Component class libraries yet, but we expect to add one in a future preview. In the meantime, you can use the Blazor Component Class Library template:

dotnet new -i Microsoft.AspNetCore.Blazor.Templates
dotnet new blazorlib

In this preview release ASP.NET Core Razor Components don’t yet support using static assets in component libraries, so the support for component class libraries is pretty limited. However, in a future preview we expect to add this support for using static assets from a library just like you can in Blazor today.

Integration with MVC Views and Razor Pages

Razor Components can be used with your existing Razor Pages and MVC apps. There is no need to rewrite existing views or pages to use Razor Components. Components can be used from within a view or page. When the page or view is rendered, any components used will be prerendered at the same time.

To render a component from a Razor Page or MVC View in this release, use the RenderComponentAsync<TComponent> HTML helper method:

<div id="Counter">
 @(await Html.RenderComponentAsync<Counter>(new { IncrementAmount = 10 }))
</div>

Components rendered from pages and views will be prerendered, but are not yet interactive (i.e. clicking the Counter button doesn’t do anything in this release). This will get addressed in a future preview, along with adding support for rendering components from pages and views using the normal element and attribute syntax.

While views and pages can use components, the converse is not true: components can’t use view- and page-specific features, like partial views and sections. If you want to use logic from a partial view in a component, you’ll need to factor that logic out into a component first.

Cross platform tooling

Visual Studio 2019 comes with built-in editor support for Razor Components including completions and diagnostics in the editor. You don’t need to install any additional extensions.

Razor Components tooling

Razor Component tooling isn’t available yet in Visual Studio for Mac or Visual Studio Code, but it’s something we are actively working on.

A bright future for Blazor

In parallel with the ASP.NET Core 3.0 work, we will continue to ship updated experimental releases of Blazor to support hosting Razor Components client-side in the browser (we’ll have more to share on the latest Blazor update shortly!). While in ASP.NET Core 3.0 we will only support hosting Razor Components in ASP.NET Core, we are also working towards shipping Blazor and support for running Razor Components in the browser on WebAssembly in a future release.

SignalR client-to-server streaming

With ASP.NET Core SignalR we added streaming support, which enables streaming return values from server-side methods. This is useful when fragments of data arrive over a period of time.

With .NET Core 3.0 Preview 2 we’ve added client-to-server streaming. With client-to-server streaming, your server-side methods can take instances of a ChannelReader<T>. In the C# code sample below, the StartStream method on the Hub will receive a stream of strings from the client.

public async Task StartStream(string streamName, ChannelReader<string> streamContent)
{
 // read from and process stream items
 while (await streamContent.WaitToReadAsync(Context.ConnectionAborted))
 {
 while (streamContent.TryRead(out var content))
 {
 // process content
 }
 }
}

Clients would use a SignalR Subject (or an RxJS Subject) as the argument for the streamContent parameter of the Hub method above.

let subject = new signalR.Subject();
await connection.send("StartStream", "MyAsciiArtStream", subject);

The JavaScript code would then use the subject.next method to send strings to the server as they are captured and ready.

subject.next("example");
subject.complete();

Using code like the two snippets above, you can create real-time streaming experiences. For a preview of what you can do with client-to-server streaming in SignalR, take a look at the demo site, streamr.azurewebsites.net. If you create your own stream, you can stream ASCII art representations of image data captured by your local web cam to the server, where it will be broadcast to other clients who are watching your stream.

Client-to-server Streaming with SignalR

System.IO.Pipelines on HttpContext

In ASP.NET Core 3.0, we’re working on consuming the System.IO.Pipelines API and exposing it in ASP.NET Core to allow you to write more performant applications.

In Preview 2, we’re exposing the request body pipe and the response body pipe on HttpContext, which you can read from and write to directly, in addition to maintaining the existing Stream-based APIs.
While these pipes are currently just wrappers over the existing streams, we will directly expose the underlying pipes in a future preview.

Here’s an example that demonstrates using both the request body and response body pipes directly.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
 if (env.IsDevelopment())
 {
 app.UseDeveloperExceptionPage();
 }

 app.UseRouting(routes =>
 {
 routes.MapGet("/", async context =>
 {
 await context.Response.WriteAsync("Hello World");
 });

 routes.MapPost("/", async context =>
 {
 while (true)
 {
 var result = await context.Request.BodyPipe.ReadAsync();
 var buffer = result.Buffer;

 if (result.IsCompleted)
 {
 break;
 }

 context.Request.BodyPipe.AdvanceTo(buffer.End);
 }
 });
 });
}

Generic host in templates

The templates have been updated to use the Generic Host instead of WebHostBuilder as they have in the past:

public static void Main(string[] args)
{
 CreateHostBuilder(args).Build().Run();
}

public static IHostBuilder CreateHostBuilder(string[] args) =>
 Host.CreateDefaultBuilder(args)
 .ConfigureWebHostDefaults(webBuilder =>
 {
 webBuilder.UseStartup<Startup>();
 });

This is part of the ongoing plan started in 2.0 to better integrate ASP.NET Core with other server scenarios that are not web specific.

What about IWebHostBuilder?

The IWebHostBuilder interface that is used with WebHostBuilder today will be kept; it is the type of the webBuilder used in the sample code above. We intend to deprecate and eventually remove WebHostBuilder itself, as its functionality will be replaced by HostBuilder, though the interface will remain.

The biggest difference between WebHostBuilder and HostBuilder is that you can no longer inject arbitrary services into your Startup.cs. Instead you will be limited to the IHostingEnvironment and IConfiguration interfaces. This removes a behavior quirk related to injecting services into Startup.cs before the ConfigureServices method is called. We will publish more details on the differences between WebHostBuilder and HostBuilder in a future deep-dive post.
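To illustrate the new constraint, a Startup class under the generic host can still take configuration and environment in its constructor, but nothing else (a sketch under that assumption):

```csharp
public class Startup
{
    // With the generic host, only IConfiguration and IHostingEnvironment
    // can be injected into the Startup constructor; arbitrary registered
    // services can no longer be requested here.
    public Startup(IConfiguration configuration, IHostingEnvironment environment)
    {
        Configuration = configuration;
        Environment = environment;
    }

    public IConfiguration Configuration { get; }
    public IHostingEnvironment Environment { get; }
}
```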

Endpoint routing updates

We’re excited to start introducing more of the Endpoint Routing story that began in 2.2. Endpoint routing allows frameworks like MVC as well as other routable things to mix with middleware in a way that hasn’t been possible before. This is now present in the project templates in 3.0.0-preview-2, we’ll continue to add more richness as we move closer to a final release.

Here’s an example:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
 if (env.IsDevelopment())
 {
 app.UseDeveloperExceptionPage();
 }

 app.UseStaticFiles();

 app.UseRouting(routes =>
 {
 routes.MapApplication();

 routes.MapGet("/hello", context =>
 {
 return context.Response.WriteAsync("Hi there!"); 
 });

 routes.MapHealthChecks("/healthz");
 });

 app.UseAuthentication();
 app.UseAuthorization();
}

There’s a few things to unpack here.

First, the UseRouting(...) call adds a new Endpoint Routing middleware. UseRouting is at the core of many of the templates in 3.0 and replaces many of the features that were implemented inside UseMvc(...) in the past.

Also notice that inside UseRouting(...) we’re setting up a few things. MapApplication() brings in MVC controllers and pages for routing. MapGet(...) shows how to wire up a request delegate to routing. MapHealthChecks(...) hooks up the health checks middleware by plugging it into routing.

What might be surprising to see is that some middleware now comes after UseRouting. Let’s tweak this example to demonstrate why that is valuable.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
 if (env.IsDevelopment())
 {
 app.UseDeveloperExceptionPage();
 }

 app.UseStaticFiles();

 app.UseRouting(routes =>
 {
 routes.MapApplication();

 routes.MapGet("/hello", context =>
 {
 return context.Response.WriteAsync("Hi there! Here's your secret message"); 
 })
 .RequireAuthorization(new AuthorizeAttribute(){ Roles = "secret-messages", });

 routes.MapHealthChecks("/healthz").RequireAuthorization("admin");
 });

 app.UseAuthentication();
 app.UseAuthorization();
}

Now I’ve added an AuthorizeAttribute to my request delegate. This is just like placing [Authorize(Roles = "secret-messages")] on an action method in a controller. We’ve also given the health checks middleware an authorization policy as well (by policy name).

This works because the following steps happen in order (ignoring what happens before routing):

  1. UseRouting(...) makes a routing decision, selecting an Endpoint
  2. UseAuthorization() looks at the Endpoint that was selected and runs the corresponding authorization policy
  3. At the end of the middleware pipeline, the selected Endpoint is executed (if no endpoint was matched, a 404 response is returned)

So think of UseRouting(...) as making a deferred routing decision, one that the middleware appearing after it can act on. Any middleware that runs after routing can see the results and read or modify the route data and chosen endpoint. When processing reaches the end of the pipeline, the endpoint is invoked.

What is an Endpoint and why did we add this?

An Endpoint is a new primitive to help frameworks (like MVC) be friends with middleware. Fundamentally an Endpoint is a request delegate (something that can execute) plus a bag of metadata (policies).

Here’s an example middleware – you can use this to examine endpoints in the debugger or by printing to the console:

app.Use(next => (context) =>
{
 var endpoint = context.GetEndpoint();
 if (endpoint != null)
 {
 Console.WriteLine("Name: " + endpoint.DisplayName);
 Console.WriteLine("Route: " + (endpoint as RouteEndpoint)?.RoutePattern);
 Console.WriteLine("Metadata: " + string.Join(", ", endpoint.Metadata));
 }

 return next(context);
});

In the past we haven’t had a good solution when we’ve wanted to implement a policy like CORS or authorization in both middleware and MVC. Putting a middleware in the pipeline feels very good because you get to configure the order. Putting filters and attributes on methods in controllers feels really good when you need to apply policies to different parts of the application. Endpoints bring together all of these advantages.

As an additional problem: what do you do if you’re writing the health checks middleware? You might want to secure your middleware in a way that developers can customize. Being able to leverage the ASP.NET Core features for this directly avoids the need to build in support for cross-cutting concerns in every component that serves HTTP.

In addition to removing code duplication from MVC, the Endpoint + Middleware solution can be used by any other ASP.NET Core-based technologies. You don’t even need to use UseRouting(...) – all that is required to leverage the enhancements to middleware is to set an Endpoint on the HttpContext.

What’s integrated with this?

We added the new authorize middleware so that you can start doing more powerful security things with just middleware. The authorize middleware can accept a default policy that applies when there’s no endpoint, or the endpoint doesn’t specify a policy.
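For instance, such a default policy can be registered through the standard authorization options (a sketch using AuthorizationPolicyBuilder; treat the exact wiring as an assumption and adjust to your app’s requirements):

```csharp
// In Startup.ConfigureServices: the policy the authorization middleware can
// fall back to when no endpoint-specific policy applies.
services.AddAuthorization(options =>
{
    options.DefaultPolicy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();
});
```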

CORS is also now endpoint routing aware and will use the CORS policy specified on an endpoint.

MVC also plugs in to endpoint routing and will create endpoints for all of your controllers and pages. MVC can now be used with the CORS and authorize features and will largely work the same. We’ve long had confusion about whether to use the CORS middleware or CORS filters in MVC; the updated guidance is to use both. This allows you to provide CORS support to other middleware or static files, while still applying more granular CORS policies with the existing attributes.

Health checks also provide methods to register the health checks middleware as a router-ware (as shown above). This allows you to specify other kinds of policies for health checks.

Finally, new in ASP.NET Core 3.0 preview 2 is host matching for routes. Placing the HostAttribute on an MVC controller or action will prompt the routing system to require the specified domain or port. Or you can use RequireHost in your Startup.cs:

app.UseRouting(routes =>
{
 routes.MapGet("/", context => context.Response.WriteAsync("Hi Contoso!"))
 .RequireHost("contoso.com");

 routes.MapGet("/", context => context.Response.WriteAsync("Hi AdventureWorks!"))
 .RequireHost("adventure-works.com");

 routes.MapHealthChecks("/healthz").RequireHost("*:8080");
});

Do you think that there are things that are missing from the endpoint story? Are there more things we should make smarter or more integrated? Please let us know what you’d like to see.

Give feedback

We hope you enjoy the new features in this preview release of ASP.NET Core! Please let us know what you think by filing issues on GitHub.

post

Microsoft’s Redmond campus modernization update: Demolition begins

Stack of white hardhats bearing the Microsoft logo

Today marks the next milestone in Microsoft’s campus modernization effort. Deconstruction is beginning, and buildings will start coming down.

When the project is complete, the new campus will provide a modern workplace and create greater collaboration and community. To commemorate the original buildings, the company offered an exclusive opportunity for a group of employees to say goodbye to the original campus with a demolition party to kick off the destruction. On Tuesday, one employee and nine of his teammates (who collectively donated money to charity to win the experience via the Employee Giving Campaign auction) took to the company’s first buildings equipped with hard hats, sledgehammers and “the claw.” Check out some highlights from the fun below:

“It is great to see the interest and excitement from employees for the campus modernization,” said Michael Ford, Microsoft general manager of global real estate and security. “Our employees are crucial to building an exceptional place to work, and this event was a great way to kick off this journey together.”

Moving forward

Over the next few months, Microsoft will continue the decommissioning and demolition of 12 buildings, embracing sustainable practices throughout the process.

In 2016, Microsoft became the first technology company in the U.S. to be certified Zero Waste for diverting at least 90 percent of its waste from the landfill. The company’s goal with this project is to remain in line with this certification for construction materials and divert a majority of building materials from the landfill. This means focusing on reusing, donating and recycling. From concrete and steel framing to carpets, ceiling tiles, electronic and networking gear, interior debris and loose assets like furniture, chairs and whiteboards, to even the artificial turf outside — most of the materials in the old spaces will find a new life.

“We strive to make a positive impact on the community,” Ford said. “We’re putting a lot of effort behind finding innovative ways to reduce our impact and optimize our resource usage.”

Beyond what is being recycled, the company is also considering where materials will be processed. To maximize sustainability, Microsoft’s construction team is engaging with local waste processing and recycling companies to study and prioritize the hauling distances to further shrink the project’s construction carbon footprint.

“Corporate and environmental responsibility are equally as important as budget and schedule — and we are aligning our design and construction practices with Microsoft’s global corporate responsibility and sustainability missions,” Ford said. “It feels good to know that here in our hometown we’re supporting this vision.”

Follow updates and developments as this project progresses on Microsoft’s Modern Campus site.