
.NET Day on Agentic Modernization Coming Soon

Join us on December 9, 2025, from 9 AM to 1 PM Pacific for .NET Day on Agentic Modernization! This is a free, one-day virtual event focused on the latest tooling, techniques, and guidance for modernizing your .NET applications. Whether you’re upgrading legacy code, preparing for cloud workloads, or exploring how AI and agentic patterns fit into your architecture, this event will show you what’s possible with today’s tooling while keeping reliability, security, and developer control front and center. Buckle up, because we’ll be demo-heavy and you can get your questions answered live!


Agenda

We have eight great sessions throughout the event, all broadcast live. Tune in and get your questions answered by the presenters!

🚀 Choose Your Modernization Adventure with GitHub Copilot with Brady Gaster – See how GitHub Copilot and Visual Studio speed app modernization: upgrading code, fixing dependencies, and guiding secure, cloud-ready migrations into Azure.

⚙️ Agentic DevOps: Enhancing .NET Web Apps with Azure MCP with Yun Jung Choi – Learn how AI-powered tooling and Azure MCP streamline .NET app development, spanning code, storage, SQL, and IaC workflows, for faster, smarter Azure-ready delivery.

🛡️ Fix It Before They Feel It: Proactive .NET Reliability with Azure SRE Agent with Deepthi Chelupati and Shamir Abdul Aziz – See how Azure SRE Agent and App Insights detect .NET regressions, automate rollbacks, and streamline incident prevention with custom agents and health checks.

☁️ No‑Code Modernization for ASP.NET with Managed Instance on Azure App Service with Andrew Westgarth and Gaurav Seth – See how Azure App Service Managed Instance removes ASP.NET migration blockers, enabling fast modernization, better performance, lower TCO, and integration with modern agentic workflows.

🤖 Modernization Made Simple: Building Agentic Solutions in .NET with Bruno Capuano – Learn how to add the Agent Framework to existing .NET apps – unlocking multi-agent collaboration, memory, and tool orchestration with practical, fast-start guidance.

💪 Bulletproof Agents with the Durable Task Extension for Microsoft Agent Framework with Chris Gillum and Thiago Almeida – See how the Durable Extension for the Microsoft Agent Framework brings durable, distributed, deterministic, and debuggable AI agents to Azure—enabling reliable, scalable, production-ready agentic workflows.

🔐 Securely Unleash AI Agents on Azure SQL and SQL Server with Davide Mauri – Learn how to let AI agents work safely with Azure SQL – enforcing strict security, least-privilege access, and schema-aware techniques that prevent data leaks and query errors.

Secure and Smart AI Agents Powered by Azure Redis with Catherine Wang – See how Azure Redis powers secure, streamlined data access for .NET agentic apps – using MCP, Redis-backed tools, and modern security patterns to simplify development.

Tune in

Don’t miss this opportunity to get practical, real-world guidance on modernizing .NET applications for Azure, AI, and agentic patterns. Mark your calendars and get ready for .NET Day on Agentic Modernization!


Microsoft and Oracle expand partnership to deliver Oracle Database Services on Oracle Cloud Infrastructure in Microsoft Azure

Microsoft joins Oracle as the only other hyperscaler to offer Oracle Cloud Infrastructure Database Services to simplify cloud migration, multicloud deployment and management


Austin, TX and Redmond, WA — September 14, 2023 — Oracle Corp. and Microsoft Corp. today announced Oracle Database@Azure, which gives customers direct access to Oracle database services running on Oracle Cloud Infrastructure (OCI) and deployed in Microsoft Azure datacenters.

Oracle Database@Azure delivers all the performance, scale, and workload availability advantages of Oracle Database on OCI with the security, flexibility, and best-in-class services of Microsoft Azure, including AI services like Azure OpenAI. This combination provides customers with more flexibility regarding where they run their workloads. It also provides a streamlined environment that simplifies cloud purchasing and management between Oracle Database and Azure services.

With the introduction of Oracle Database@Azure, Oracle and Microsoft are helping customers accelerate their migration to the cloud, so they can modernize their IT environments and take advantage of Azure’s infrastructure, tooling, and services. Customers will benefit from:

  • More options to move their Oracle databases to the cloud;
  • The highest level of Oracle database performance, scale, and availability, as well as feature and pricing parity;
  • The simplicity, security, and latency of a single operating environment (datacenter) within Azure;
  • The ability to build new cloud native applications using OCI and Azure technologies, including Azure’s best-in-class AI services;
  • The assurance of an architecture that is tested and supported by two of the most trusted names in the cloud.

“We have a real opportunity to help organizations bring their mission-critical applications to the cloud so they can transform every part of their business with this next generation of AI,” said Satya Nadella, Chairman and CEO, Microsoft. “Our expanded partnership with Oracle will make Microsoft Azure the only other cloud provider to run Oracle’s database services and help our customers unlock a new wave of cloud-powered innovation.”

“Most customers already use multiple clouds,” said Larry Ellison, Oracle Chairman and CTO. “Microsoft and Oracle have been working together to make it easy for those customers to seamlessly connect Azure services with the very latest Oracle Database technology. By colocating Oracle Exadata hardware in Azure datacenters, customers will experience the best possible database and network performance. We are proud to partner with Microsoft to deliver this best-in-class capability to customers.”

Multicloud Made for Customers

The new service delivers a fully integrated experience for deploying, managing, and using Oracle database instances within Azure. It enables organizations to drive breakthroughs in the cloud using their existing skills to leverage the best of Oracle and Microsoft capabilities directly within the Azure portal.

The new service is designed to eliminate customers’ biggest challenges in adopting multicloud architectures, including disjointed management, siloed tools, and a complex purchasing process.

As a result of this expanded partnership, customers will have the choice to deploy their Azure services with their fully managed Oracle Database services all within a single datacenter, including support for Oracle Exadata Database services, Oracle Autonomous Database services, and Oracle Real Application Clusters (RAC). Oracle and Microsoft have also developed a joint support model to provide rapid response and resolution for mission-critical workloads.

Additionally, Oracle and Microsoft have significantly simplified the purchasing and contracting process. Customers will be able to purchase Oracle Database@Azure through Azure Marketplace, leveraging their existing Azure agreements. They will also be able to use their existing Oracle Database license benefits including Bring Your Own License and the Oracle Support Rewards program.

“As we continue our digital transformation through innovation and technology, interoperability across cloud service providers to enable safe, secure, and rapid financial transactions for our 40 million customers is paramount,” said Mihir Shah, enterprise head of data, Fidelity Investments. “Today’s announcement displays how industry leaders Microsoft and Oracle are putting their customers’ interests first and providing a collaborative solution that enables organizations like Fidelity to deliver best-in-class experiences for our customers and meet the substantial compliance and regulatory requirements with minimal downtime.”

“Data is the lifeblood of any business, and the cloud is the best way to analyze it so that insights become actionable,” said Magesh Bagavathi, senior vice president and global chief technology officer, PepsiCo. “As one of the largest food and beverage companies in the world with a market value of over 200 billion U.S. dollars, the ability to run our mission-critical systems and associated data in the cloud with Oracle Database@Azure gives us a scaled strategic advantage across our global operations.”

“We are looking to our technology partners to support Vodafone’s strategic focus on customers, simplicity and growth across Europe and Africa,” said Scott Petty, Chief Technology Officer, Vodafone. “This new offering from Oracle and Microsoft does that by enabling us to deliver innovative and differentiated digital services faster and more cost effectively to our customers.”

“As a global leader in the financial services industry, Voya has harnessed the power of digital transformation to help provide the best experience for our customers and employees. As we continue to bring our business applications to the cloud, cloud partnerships have the potential to help the entire industry maintain better security, compliance, and performance, helping to accelerate the development of new technology products, solutions, and services that enhance customer experience and help achieve better financial outcomes,” said Santhosh Keshavan, executive vice president and chief information officer, Voya Financial, Inc.

Oracle will operate and manage these OCI services directly within Microsoft’s datacenters globally, beginning with regions in North America and Europe.

For more information on how to get started with Oracle Database@Azure, visit our Microsoft and Oracle solutions page.


Contact Info

Oracle PR

Carolin Bachmann
[email protected]
+1.415.622.8466

Microsoft Media Relations

WE Communications for Microsoft
[email protected]
+1.425.638.7777 

About Oracle
Oracle offers integrated suites of applications plus secure, autonomous infrastructure in the Oracle Cloud. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

About Microsoft
Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

Trademarks
Oracle, Java, MySQL and NetSuite are registered trademarks of Oracle Corporation. NetSuite was the first cloud company—ushering in the new era of cloud computing.

“Safe Harbor” Statement: Statements in this press release relating to Oracle’s future plans, expectations, beliefs, intentions and prospects, including statements regarding expected benefits of Oracle Database@Azure and its best-in-class capability, are “forward-looking statements” and are subject to material risks and uncertainties. Risks and uncertainties that could affect our current expectations and our actual results, include, among others: our ability to develop new products and services, integrate acquired products and services and enhance our existing products and services; our management of complex cloud and hardware offerings, including the sourcing of technologies and technology components; significant coding, manufacturing or configuration errors in our offerings; risks associated with acquisitions; economic, political and market conditions; information technology system failures, privacy and data security concerns; cybersecurity breaches; unfavorable legal proceedings, government investigations, and complex and changing laws and regulations. A detailed discussion of these factors and other risks that affect our business is contained in our SEC filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading “Risk Factors.” Copies of these filings are available online from the SEC or by contacting Oracle’s Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on the Oracle Investor Relations website at www.oracle.com/investor/. All information set forth in this press release is current as of September 14, 2023. Oracle undertakes no duty to update any statement in light of new information or future events.


App Innovation Report: Prioritizing human pain points to build high-impact experiences

In October, our Microsoft Ignite event drove incredible dialogue around how organizations can navigate a challenging market. Digital perseverance, as encapsulated so well by Judson Althoff, our executive vice president and chief commercial officer, resonates deeply with us across the Azure business and is sharpening how we think about strategic innovation in the face of headwinds.

There’s that word: innovation. It continues to mean so many things to so many people and, honestly, innovation becomes harder to distill amid constrained resources and near-term pressure. As we close 2022, reflecting on our own transformation, we remain inspired by a holistic cloud strategy, one which balances what’s in front of us with what’s coming around the corner. Now more than ever, continuing to innovate in the right places is essential to maintaining a productivity edge internally and strengthening differentiation externally. Your app estate is on the front lines of it all.

Last week, we shared insights from a range of cloud customers who plan to concentrate on migration and modernization for business resilience. Now I want to home in on what kinds of modern experiences are in high demand, to help organizations prioritize. Today, we’re releasing our first App Innovation Report, which identifies where digital investments can root innovation in human pain points. It all comes back to what problems need solving and looking at return on investment from another critical angle: the people technology serves. As different as we are and as complex as our digital world has become, we psychologically seek a lot of the same things. We each value flow, integration, simplicity, efficiency, protection, freedom and fulfillment. By tapping into these common threads, businesses can deliver experiences people enjoy and keep coming back to.

Foundational security and integration concerns rise to the top

This fall we studied over 1,500 people working on the front lines of business challenges in healthcare, manufacturing and retail, and while we spoke to them as employees, we considered their perspectives as patients and shoppers too; the business-to-business-to-consumer (B2B2C) duality of our application expectations is pretty fascinating. Through interviews and surveys in five innovation hubs — the United States, Norway, Brazil, Japan and India — we applied social science to the state of the digital experience. It felt good to take a step back and ask our customers’ customers one simple question: What do you want and need from apps today? The answers surprised us — not because they were grandiose or futuristic but because they were foundational.

Top 16 forces of app innovation: “Promise scores” above reflect a combination of people’s rational preference and emotional passion. Scores are indexed against an average of 100, so scores above 100 represent the most promising innovation areas.

People feel stuck, frustrated and lost because of disjointed apps

People around the world in a vast range of roles expressed the most passion about disruption to their workflow. On the receiving side of apps of all kinds, there’s palpable frustration from having to manually patch together disparate tasks and tools. We learned people are struggling to secure data automatically, to streamline repetitive input, and to have well-informed interactions with others. While we heard desire for immersive technologies bubbling to the forefront, it wasn’t as prominent as pragmatic everyday realities, and from an Azure lens these areas signal opportunities to help advance AI automation, API management, and app intelligence. Whether you’re migrating workloads, modernizing, or building something new for flexibility and scale, the cloud can architecturally address so many of the trends we found.

Fear of compromising company data weighs on workers too

In the industries we researched, people share their companies’ concerns about data breaches on a personal, day-to-day level. In healthcare, they spoke about pressures surrounding HIPAA requirements when handling patient data. A sense of nervousness about security was also expressed in manufacturing, a highly targeted industry for ransomware, and retail workers worry about their accountability in handling customers’ financial information. Security is so incredibly important to us in Azure, so hearing from the usage and functionality side of this space emboldens us to partner on app development that can continue to advance confidence up and down organizations.

The cloud investments we make and the customization we code affect so many layers of a company and, in the applications space, innovation boils down to experiences people can count on and even enjoy using — it’s one of the most fulfilling aspects of the business we’re in. Well-architected workloads don’t only enable cost optimization; they provide a foundation for innovation where business leaders, technology and developer teams can adapt quickly to deliver secure, intelligent and intuitive apps people keep coming back to.

The need for more meaningful conversations stands out in retail

In light of the season, one industry-specific breakdown is especially timely. We see experience needs rank differently in retail compared to cross-industry averages, with repetitive input surpassing automatic security, and desire for meaningful conversations breaking through in a way we don’t see elsewhere. Because shoppers today do so much research before visiting stores, retail workers want digital solutions that can help them provide information about product features, pricing, discounts and availability without breaking their connection to the customer. A sales engagement that feels jarring or closed off today could become warmer and more interactive tomorrow.

Retail-specific needs: “Promise scores” from retail workers above are also indexed against an average of 100. Scores as high as 141 and 133 indicate exceptionally strong rational preference and emotional passion.

Business leaders agree with worker concerns but need help prioritizing

Rounding out the research, we put our report in front of business leaders — both those who choose Azure services and customers of other cloud providers. We wanted to understand how chief technology, digital and operations officers are thinking about app design, and the themes voiced by workers were equally important to them. So, if workers have unmet needs like this and leadership is aligned, what’s next? Prioritization.

How we each define innovation is as unique as the cloud journeys we’re on, but giving app experiences more shape helps us see strategic pockets in new ways and grounds cloud investments in the people who run your business and the people you’re selling to. What kinds of inefficiencies are draining resources most immediately? Which growth territories will you target to compete? Organizations can’t tackle every app innovation project at once, but together, we can carve out data-driven spots that present the most promise.

Spot the opportunities: People’s rational preference and emotional passion for different app innovation areas are mapped above in the context of satisfaction with existing solutions in the market and relevance in their lives. Clockwise from the top-right quadrant, the urgency of each innovation area is broken out for planning.

One of the many things that drew me to Azure’s business is how many people it touches. When we look at Azure customers like the NBA, H&M or Starbucks, we see how tangible the outcomes are in the palm of our own hands as consumers, and suddenly what the cloud makes possible begins to click no matter who you’re talking to. When I watch real-time data on my Boston Celtics, when I’m able to shop for loved ones in a single click, and when I can enjoy an iconic red cup without waiting in line, I am experiencing cloud-first app investments that pay off in spades. From the back end to the front end of app design, we’re reminded that the business impact of everything digital is highly human.

On behalf of the Azure team, we hope the App Innovation Report inspires your work going into the new year. And no matter how or what you celebrate, I wish you and your loved ones a safe and happy season.

Source: Future of App Innovation study conducted by Ipsos between July and September 2022 among 1,500 workers in healthcare, manufacturing and retail in the U.S., Brazil, India, Japan and Norway.



Customers want to know: How do I get more value from my data?

In this episode of “Digital Now,” two key architects of Microsoft’s intelligent data platform describe how Microsoft and its customers are undergoing a cultural shift in which data has become a complementary discipline alongside software and code.

“Digital Now” is a video series hosted by Andrew Wilson, chief digital officer at Microsoft, who invites friends and industry leaders inside and outside of Microsoft to share how they are tackling digital and business transformation, and explores themes like the future of work, security, artificial intelligence and the democratization of code and data.

Rohan Kumar, corporate vice president, Azure Data, and Karthik Ravindran, general manager, Enterprise Analytics and Data Governance, explain that data doesn’t belong to one particular team, but is an organizational asset that can be responsibly democratized to provide valuable insights.

“I think having data scientists who can generate real-time insight, having an organization that’s data-led not system-led are really powerful tenets of a modern strategy,” agrees Wilson.

Visit Digital Now on YouTube to view more episodes.


Taste of success: FoodCloud uses technology to get surplus food to nonprofits more efficiently

Since 2013, FoodCloud has redistributed nearly 180 million meals through its two solutions across Ireland, the U.K. and parts of Europe, and estimates it has kept more than 75,000 tons of food from going to waste in landfills.

Tesco, the U.K.’s largest supermarket chain, partnered with FoodCloud in a pilot program with its 146 stores in Ireland. The 2013 partnership was so successful that Tesco expanded it to its more than 3,000 stores in the U.K. The bulk of Tesco’s surplus food includes fresh fruit, vegetables and bakery products.

FoodCloud continued to refine its technology platform, Foodiverse, so that it was simple for both supermarkets and nonprofits to use, a huge plus for Tesco.

“Where they started from technology-wise to where they are now is light years apart,” says Lorraine Shiels, Tesco Ireland head of corporate social responsibility and internal communications. “They developed a solution that we saw could work and could integrate within our technology,” and it was something that any Tesco employee could easily use.

A Tesco worker scans potential surplus food for donation. Photo by Tesco.

“Simplicity in retail, as in any business, is incredibly important for any sort of sustainability of process,” she says. “And the fact that the app they had developed was incredibly simple but achieved an end goal was really, really important to us.”

Foodiverse is hosted on Azure. Power BI also plays a key role in much of the internal reporting developed by FoodCloud. Now, the nonprofit is also incorporating Dynamics 365 Business Central to unlock other insights, including conducting stock counts and movements live on the floors of FoodCloud’s three hubs, and enabling prompts to highlight where there may be issues to resolve.

Dynamics 365 sped up FoodCloud’s processes significantly and so far has helped contribute to an 11% increase in surplus food redistribution, year over year.

FoodCloud is fully integrated into Tesco’s technology systems in stores, Shiels says. “Absolutely every item of food that we scan through in the evening to donate is trackable and traceable, so we’re fully able to measure our donations end to end – the amount of meals that we donate, the kilos, broken down by store, the carbon footprint associated with it. There’s a great level of insight and reporting behind it from a business perspective.”

FoodCloud is also now working with Tesco in central Europe, including the Czech Republic and Slovakia. It also has partnerships with other supermarket chains including Aldi, Dunnes Stores, Lidl, Musgrave MarketPlace and Waitrose, and international food companies including Kellogg’s.

Kellogg’s began working with FoodCloud in Ireland in 2020, donating surplus breakfast cereals and breakfast bars. The company has a long history of donating food to families and to schools’ “breakfast clubs” in both Ireland and the U.K.

Deliveries are loaded onto a FoodCloud truck at its Dublin warehouse, or hub. Photo by Chris Welsch for Microsoft.

“We know our food is very popular amongst FoodCloud’s beneficiary organizations,” says Kate Prince, senior ESG (environmental, social and governance) manager for Kellogg Europe. “For many families, obviously it’s a very convenient and quick breakfast.”

And it doesn’t require heat to eat, which is becoming more and more important now. “Many people are struggling with rising energy costs, and so for those families, breakfast cereal is a good option,” she says.

Kellogg’s is also doing a “significant rethink” of how to “overcome the challenges facing today’s food system,” Nigel Hughes, Kellogg senior vice president of Global R&D and Innovation, wrote in a recent blog post. “We must move from a linear approach to a circular one that prioritizes regenerative production, reduces resource inputs and aims to ensure recovery for future uses and minimize wastage.”

In the U.K., FoodCloud works with FareShare, the national network for charitable food redistributors. FareShare sorts surplus food in regional warehouses, then distributes it through a network of over 9,000 nonprofits. FareShare has been working with FoodCloud since 2013 when they developed the Tesco back-of-store solution together.

“We have formed an incredibly impactful solution for the U.K., working together to redistribute food from across the retail, wholesale and food service industry,” says Li Brookman, head of FareShare Go, which provides charities and community groups with direct access to surplus food from local supermarkets, wholesalers and restaurants.

Aidan McNamara says having FoodCloud’s technology app on his phone makes it easy for him to know when and what food will be on its way to Rosepark Independent Living in Dublin, where he is the manager. Sixteen residents, ranging in age from 64 to 95, live at the nonprofit facility.

McNamara is also the Sunday chef at Rosepark, where fresh meals are prepared daily, including three-course lunches.

Donated vegetables and some yummy menus from the staff mean nutritious meals for residents at Rosepark residential center in Blackrock, a suburb of Dublin. Photo by Chris Welsch for Microsoft.

How IoT, AI and Digital Twins are helping achieve sustainability goals


Organizations striving to improve their sustainability can make progress toward those goals by using the Internet of Things (IoT) and AI technology that monitors and analyzes their use of resources and resulting emissions. However, businesses adopting IoT for other reasons often improve their sustainability as a side benefit as well.

Nearly three-fourths of IoT adopters with near-term sustainability goals view IoT solutions as “very important” for reaching those goals. The combination of sensor devices, edge and cloud computing, and AI and machine learning can provide data and analytical insights into how resources are being used, where leaks or faults are occurring and affecting consumption, and where efficiency can be improved. Additionally, Digital Twins technology can create digital models of real-world equipment, buildings, or even smart cities for more detailed insights into how they can be run more sustainably.
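
To make the Digital Twins idea concrete, here is a minimal sketch of pulling energy insights out of an Azure Digital Twins instance with the azure-digitaltwins-core Python SDK. The instance URL, the dtmi:example:Building;1 model ID, and the energyUseKwh property are hypothetical placeholders, not part of any solution described in this post.

```python
# Minimal sketch: find building twins whose reported energy use exceeds a
# threshold. The endpoint, model ID, and property names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

credential = DefaultAzureCredential()
client = DigitalTwinsClient(
    "https://<your-instance>.api.<region>.digitaltwins.azure.net", credential
)

# Azure Digital Twins exposes a SQL-like query language over twin properties.
query = (
    "SELECT T.$dtId, T.energyUseKwh "
    "FROM DIGITALTWINS T "
    "WHERE IS_OF_MODEL(T, 'dtmi:example:Building;1') "
    "AND T.energyUseKwh > 1000"
)

for twin in client.query_twins(query):
    print(f"{twin['$dtId']}: {twin['energyUseKwh']} kWh")
```

A report like this could feed the kind of sustainability dashboards discussed below, though a production solution would typically stream telemetry through IoT Hub before it ever lands on a twin.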

Our recently published e-book, “Improving sustainability and smarter resource use with IoT technology,” goes further in depth on the following insights and case studies about IoT and AI solutions and sustainability.

How digital technology can aid sustainability efforts

With greater awareness of climate change and increasing regulation around activities related to emissions and resource usage, sustainability efforts are becoming an urgent priority at many organizations. Microsoft has established transparent goals and tracking of its progress toward carbon-neutral operations and offers a software solution to help others record and report their environmental impact.

We’re also using Microsoft Azure IoT platform tools to help power solutions in the following sustainability categories:

  • Efficient energy production and distribution: Digital tools are being applied to help electricity production plants—a significant source of air emissions—operate as efficiently and cleanly as possible. Utilities are using IoT solutions to monitor and manage electricity transmission and distribution grids to achieve maximum efficiency, route additional power as demand fluctuates, and detect outages faster. They’re also helping to remotely control renewable energy facilities such as wind farms. Our customer smartPulse offers a solution designed to manage electricity distribution and trading to give utilities the ability to manage imbalances in a financially favorable way.
  • Creating smarter, carbon-neutral buildings: The construction and operation of buildings create 38 percent of total energy-related emissions of carbon dioxide around the world, creating an enormous opportunity for smart building solutions to make a notable impact on the carbon footprint of buildings. IoT technology, Digital Twins modeling, and AI have proven especially useful in managing buildings by automating lighting and climate-control systems, as well as modeling the environmental effects of any design or operational changes. Vasakronan, a global leader in sustainability, has adopted IoT and Azure Digital Twins solutions for its commercial and office properties across Sweden, leading to notable energy cost savings.
  • Improving public infrastructure: Updating infrastructure with IoT technology can make it more sustainable and create other livability improvements, such as increasing safety and reducing excess light pollution. The city of Valencia in Spain saw this when city officials launched a public lighting upgrade. The project included replacing lighting in a national park, where too much light can disrupt wildlife and plants. Light solution provider Schréder and Codit, a cloud integration solutions provider, teamed to upgrade more than 100,000 lighting fixtures and tie in Azure IoT technologies. The city reduced its electricity consumption, cutting greenhouse gas emissions by 80 percent and saving millions of euros annually.
  • Agriculture and food production: Data-gathering and analytical technology informs decisions that lead to better environmental practices involving planting, watering, and pesticide use. Computer vision can detect when weeds or pests are threatening a growing area. Related technology is contributing to the development of more automation at a time when farm labor shortages are becoming more common. The N.C. State Plant Sciences Initiative, for example, is using faster and more efficient data management to tackle agriculture’s biggest challenges, with the aim of creating better predictive food analytics, increasing food safety, and making crops more productive.

Improving business performance at the same time

Beyond the benefits of reducing consumption of natural resources and reining in emissions, sustainability efforts can generate business value. Forty percent of respondents in a recent survey said they expect their company’s sustainability programs to generate modest or significant value in the next five years. That value primarily comes from saving energy costs, cutting back on needed materials, and improving operational efficiency.

Get started with sustainable IoT solutions

By combining sustainability goals with innovative solutions, businesses and people can limit their everyday impact on the planet’s resources. Azure IoT can help transform businesses to be more efficient, manage renewable energy production, reduce waste, or accelerate the development and launch of sustainably oriented apps. A range of end-to-end solutions from our ecosystem of partners addresses sustainability in a variety of ways as well.

Learn more from our e-book, “Improving sustainability and smarter resource use with IoT technology,” or discover how Azure IoT can help your organization adopt IoT, AI, and related technologies.

Learn more


New Azure Quantum Resource Estimator empowers you to create algorithms for quantum at scale

Microsoft Azure Quantum Resource Estimator enables quantum innovators to develop and refine algorithms to run on tomorrow’s scaled quantum computers. This new tool is one way Microsoft empowers innovators to have breakthrough impact with quantum at scale.

The quantum computers available today enable interesting experimentation and research, but they are unable to accelerate the computations necessary to solve real-world problems. While the industry awaits hardware advances, quantum software innovators are eager to make progress and prepare for a quantum future. Creating algorithms today that will eventually run on tomorrow’s fault-tolerant scaled quantum computers is a daunting task. These innovators are faced with questions such as: What hardware resources are required? How many physical and logical qubits are needed, and of what type? What’s the runtime? Azure Quantum Resource Estimator was designed specifically to answer these questions. Understanding this data will help innovators create, test, and refine their algorithms, ultimately leading to practical solutions that take advantage of scaled quantum computers when they become available.

Infographic: Azure Quantum Resource Estimation, with six pillars: application input, compilation tools, QIR, QEC models, qubit models, and analysis.

The Azure Quantum Resource Estimator started as an internal tool and has been key in shaping the design of Microsoft’s quantum machine. The insights it has provided have informed our approach to engineering a machine capable of the scale required for impact, including the machine’s architecture and our decision to use topological qubits. We’re making progress on our machine and recently had a physics breakthrough that was detailed in a preprint on arXiv. On Thursday, we will take another step forward in transparency by publicly publishing the raw data and analysis in interactive Jupyter notebooks on Azure Quantum. These notebooks provide the exact steps needed to reproduce all the data in our paper. While engineering challenges remain, the physics discovery demonstrated in this data proves out a fundamental building block for our approach to a scaled quantum computer and puts Microsoft on the path to deliver a quantum machine in Azure that will help solve some of the world’s toughest problems.

As we advance our hardware, we are also focused on empowering software innovators to advance their algorithms. The Azure Quantum Resource Estimator tackles one of the most challenging problems facing researchers developing quantum algorithms. It breaks down the resources required for a quantum algorithm, including the total number of physical qubits, the computational resources required (including wall-clock time), and the details of the formulas and values used for each estimate. This means algorithm development becomes the focus, with the goal of optimizing performance and decreasing cost. For the first time, it is possible to compare resource estimates for quantum algorithms at scale across different hardware profiles. Start from well-known, predefined qubit parameter settings and quantum error correction (QEC) schemes, or configure unique settings across a wide range of machine characteristics such as operation error rates, operation speeds, and error correction schemes and thresholds.

“Resource estimation is an increasingly important task for development of quantum computing technology. We are happy we could use Microsoft’s new tool for our research on this topic. It’s easy to use. The integration process was simple, and the results give both a high-level overview helpful for people new to error correction, as well as a detailed breakdown for experts. Resource estimation should be a part of the pipeline for anyone working on fault-tolerant quantum algorithms. Microsoft’s new tool is great for this.”— Michał Stęchły, Tech Lead at Quantum Software Team, Zapata Computing.

The Resource Estimator will help drive the transition from today’s noisy intermediate-scale quantum (NISQ) systems to tomorrow’s fault-tolerant quantum computers. Today’s NISQ systems might enable running small numbers of operations in an algorithm successfully, but to get to practical quantum advantage there will need to be trillions or more operations running successfully. This gap will be closed by scaling up to a fault-tolerant quantum machine with built-in quantum error correction. This means each qubit and operation requested in a user’s program will be encoded into some number of physical qubits and operations at the hardware level (for example, a distance-d surface code uses on the order of d² physical qubits per logical qubit), and the software stack will perform this conversion automatically. Now with the Resource Estimator, you can walk through these conversions, estimate the overheads in time and space required to enable implementation of your scaled quantum algorithms on a variety of hardware designs, and use the information to improve your algorithms and applications well before scaled fault-tolerant hardware is available. In our recent preprint on arXiv, we show how to use the Resource Estimator to understand the cost of three important quantum algorithms that promise practical quantum advantage.

Resource Estimation paves the way for hardware-software co-design, enabling hardware designers to improve their architectures based on how large-scale algorithms might run on their specific implementation, and in turn, allowing algorithm and software developers to iterate on bringing down the cost of algorithms at scale.

“The Resource Estimator breaks down the resources needed to run a useful algorithm at scale. Putting precise numbers on the actual scale at which quantum computing provides industry-relevant solutions sheds light on the tremendous effort that has yet to be realized. This strengthens our commitment to our roadmap, which is focused on delivering an error-corrected quantum computer using a hardware-efficient approach.”—Jérémie Guillaud, Chief of Theory at Alice&Bob.

Built on the foundation of the community-supported quantum intermediate representation (QIR), the Resource Estimator is both extensible and portable, and can be used with popular quantum SDKs and languages such as Q# and Qiskit. QIR was created in alliance with the Linux Foundation and other partners; it is an open source standard that serves as a common interface between many languages and target quantum computation platforms.

Getting started with resource estimation

It is easy to get started and gain your first insights with the tool. The example below shows how to estimate and analyze the physical resources required to run a quantum program on a fault-tolerant quantum computer.

1. Set up your Azure Quantum workspace and get started with Resource Estimation.

Azure Quantum, Azure’s free, cloud-based service, is available to everyone. To get started, just set up an Azure account (check out free Azure accounts for students) and create an Azure Quantum workspace in the Azure Portal.

If you already have an Azure Quantum workspace set up:

a) Open your workspace in the Azure portal.
b) On the left panel, under Operations, select Providers.
c) Select + Add a provider.
d) Select Microsoft Quantum Computing.
e) Select Learn & Develop, then select Save.

2. Start with a ready-to-use sample.

To start running quantum programs with no installation required, try our free hosted notebooks experience in the Azure Portal. Our hosted Jupyter Notebooks enable a variety of languages and Quantum SDKs. You will find them in your Azure Quantum workspace (#1). Selecting Notebooks in the portal will take you to the sample gallery, where you will find the Resource Estimation tab (#2). Once there, choose one of the first two samples and then select the “Copy to my notebooks” button (#3) to add the sample to your workspace.

Screenshot of the resource estimation tool workspace UI.

3. Run your first Resource Estimation

After the sample has been copied to My notebooks you can select it from the Workspace menu to load it as a hosted notebook in the Azure Portal. From there, just select Run all from the top of the Jupyter Notebook to execute the program. You will be able to run an entire Resource Estimation job without writing a single line of code!

The results will immediately provide estimates of total physical qubits and runtime for the algorithm provided. For a deeper understanding of the resources consumed by the algorithm, you can trace the source of each result with detailed explanations of formulas. These deeper results can be re-used and shared in your research.

Screenshot of the resource estimation tool results.
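
For readers who would rather submit estimation jobs from their own environment than from the hosted notebooks, here is a minimal sketch using the Qiskit provider in the azure-quantum Python package. The resource ID and location are placeholders for your workspace, and the optional qubitParams and errorBudget inputs follow the predefined settings described in the documentation; treat the exact parameter names as assumptions to verify there.

```python
# Minimal sketch: estimate physical resources for a small circuit with the
# Azure Quantum Resource Estimator target ("microsoft.estimator").
# Assumes `pip install azure-quantum[qiskit]`; IDs below are placeholders.
from azure.quantum.qiskit import AzureQuantumProvider
from qiskit import QuantumCircuit

provider = AzureQuantumProvider(
    resource_id="/subscriptions/<sub>/resourceGroups/<rg>/providers/"
                "Microsoft.Quantum/Workspaces/<workspace>",
    location="eastus",
)
backend = provider.get_backend("microsoft.estimator")

# A tiny GHZ-state circuit stands in for a real algorithm.
circuit = QuantumCircuit(3, name="ghz")
circuit.h(0)
circuit.cx(0, 1)
circuit.cx(0, 2)

# Optionally pick a predefined qubit model and an overall error budget
# (parameter names assumed from the docs).
job = backend.run(
    circuit,
    qubitParams={"name": "qubit_gate_ns_e3"},
    errorBudget=0.001,
)
result = job.result()
print(result)  # physical qubits, runtime, and per-stage breakdowns
```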

Learn more about Resource Estimation

There are many ways to learn more:

  • Visit our technical documentation for more information on Resource Estimation, including detailed steps to get you started.
  • Log in to the Azure portal, visit your Azure Quantum workspace, and try an advanced sample on topics such as factoring and quantum chemistry.
  • Dive deeper into our research on Resource Estimation at arXiv.org.

Do more with less using new Azure HX and HBv4 virtual machines for HPC

This post was co-authored by Jyothi Venkatesh, Senior Product Manager, Azure HPC and Fanny Ou, Technical Program Manager, Azure HPC.

The next generation of purpose-built Azure HPC virtual machines

Today, we are excited to announce two new virtual machines (VMs) that deliver more performance, value-adding innovation, and cost-effectiveness to every Azure HPC customer. The all-new HX-series and HBv4-series VMs are coming soon to the East US region, and thereafter to the South Central US, West US3, and West Europe regions. These new VMs are optimized for a variety of HPC workloads such as computational fluid dynamics (CFD), finite element analysis, frontend and backend electronic design automation (EDA), rendering, molecular dynamics, computational geoscience, weather simulation, AI inference, and financial risk analysis.

Innovative technologies to help HPC customers where it matters most

HX and HBv4 VMs are packed with new and innovative technologies that maximize performance and minimize total HPC spend, including:

  • 4th Gen AMD EPYC™ processors (Preview, Q4 2022).
  • Upcoming AMD EPYC processors, codenamed “Genoa-X” (general availability in 1H 2023).
  • 800 GB/s of DDR5 memory bandwidth (STREAM TRIAD; a toy TRIAD sketch follows this list).
  • 400 Gb/s NVIDIA Quantum-2 CX7 InfiniBand, the first on the public cloud.
  • 80 Gb/s Azure Accelerated Networking.
  • PCIe Gen4 NVMe SSDs delivering 12 GB/s (read) and 7 GB/s (write) of storage bandwidth.
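
As a rough illustration of what the STREAM TRIAD figure in the list above measures, here is a toy bandwidth probe in NumPy. It runs the TRIAD kernel (a = b + scalar * c) in a single process, so it will land far below the 800 GB/s quoted for the full tuned benchmark running across all cores; it is meant only to show the kernel, not to reproduce the number.

```python
# Toy STREAM TRIAD-style probe: a = b + scalar * c over large arrays.
# Single-process NumPy will not approach tuned multi-core STREAM results.
import time
import numpy as np

N = 20_000_000                  # ~160 MB per float64 array
b = np.random.rand(N)
c = np.random.rand(N)
a = np.empty_like(b)
scalar = 3.0

start = time.perf_counter()
np.multiply(c, scalar, out=a)   # a = scalar * c
np.add(a, b, out=a)             # a = b + scalar * c
elapsed = time.perf_counter() - start

# TRIAD counts three array transfers: read b, read c, write a.
print(f"~{3 * 8 * N / elapsed / 1e9:.1f} GB/s")
```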

Below are preliminary benchmarks from the preview of HBv4 and HX series VMs using 4th Gen AMD EPYC processors across several common HPC applications and domains. For comparison, performance information is also included from Azure’s most recent H-series (HBv3-series with Milan-X processors), as well as a 4-year-old HPC-optimized server commonly found in many on-premises datacenters (represented here by Azure HC-series with Skylake processors).


Figure 1: Performance comparison of HBv4/HX-series in Preview to HBv3-series and four-year-old server technology in an HPC-optimized configuration across diverse workloads and scientific domains.

Learn more about the performance of HBv4 and HX-series VMs with 4th Gen EPYC CPUs.

HBv4-series brings performance leaps across a diverse set of HPC workloads

Azure HBv3 VMs with 3rd Gen AMD EPYC™ processors with AMD 3D V-cache™ Technology already deliver impressive levels of HPC performance, scaling MPI workloads up to 27x higher than other clouds, surpassing many of the leading supercomputers in the world, and offering the disruptive value proposition of faster time to solution with lower total cost. Unsurprisingly, the response from customers and partners has been phenomenal. With the introduction of HBv4 series VMs, Azure is raising the bar yet again—this time across an even greater diversity of memory performance-bound, compute-bound, and massively parallel workloads.

| VM Size | Physical CPU Cores | RAM (GB) | Memory Bandwidth (STREAM TRIAD, GB/s) | L3 Cache/VM (MB) | FP64 Compute (TFLOPS) | InfiniBand RDMA Network (Gbps) |
| --- | --- | --- | --- | --- | --- | --- |
| Standard_HB176rs_v4 | 176 | 688 | 800 | 768 | 6 | 400 |
| Standard_HB176-144rs_v4 | 144 | 688 | 800 | 768 | 6 | 400 |
| Standard_HB176-96rs_v4 | 96 | 688 | 800 | 768 | 6 | 400 |
| Standard_HB176-48rs_v4 | 48 | 688 | 800 | 768 | 6 | 400 |
| Standard_HB176-24rs_v4 | 24 | 688 | 800 | 768 | 6 | 400 |

Notes: 1) “r” denotes support for remote direct memory access (RDMA) and “s” denotes support for Premium SSD disks. 2) At General Availability, Azure HBv4 VMs will be upgraded to Genoa-X processors featuring 3D V-cache. Updated technical specifications for HBv4 will be posted at that time.

HX-series powers next generation silicon design

In Azure, we strive to deliver the best platform for silicon design, both now and far into the future. Azure HBv3 VMs, featuring 3rd Gen AMD EPYC processors with AMD 3D V-cache Technology, are a significant step toward this objective, offering the highest performance and total cost effectiveness in the public cloud for small and medium memory EDA workloads. With the introduction of HX-series VMs, Azure is enhancing its differentiation with a VM purpose-built for even larger models becoming commonplace among chip designers targeting 3, 4, and 5 nanometer processes.

HX VMs will feature 3x more RAM than any prior H-series VM, up to nearly 60 GB of RAM per core, as well as constrained-core VM sizes to help silicon design customers maximize the ROI of their per-core commercial licensing investments.

| VM Size | Physical CPU Cores | RAM (GB) | Memory/Core (GB) | L3 Cache/VM (MB) | Local SSD NVMe (TB) | InfiniBand RDMA Network (Gbps) |
| --- | --- | --- | --- | --- | --- | --- |
| Standard_HX176rs | 176 | 1,408 | 8 | 768 | 3.6 | 400 |
| Standard_HX176-144rs | 144 | 1,408 | 10 | 768 | 3.6 | 400 |
| Standard_HX176-96rs | 96 | 1,408 | 15 | 768 | 3.6 | 400 |
| Standard_HX176-48rs | 48 | 1,408 | 29 | 768 | 3.6 | 400 |
| Standard_HX176-24rs | 24 | 1,408 | 59 | 768 | 3.6 | 400 |

Notes: 1) “r” denotes support for remote direct memory access (RDMA) and “s” denotes support for Premium SSD disks. 2) At General Availability, Azure HX VMs will be upgraded to Genoa-X processors featuring 3D V-cache. Updated technical specifications for HX-series will be posted at that time.
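
Once these sizes are live in a region, you can discover them, including the constrained-core variants above, programmatically. A minimal sketch with the azure-identity and azure-mgmt-compute packages follows; the subscription ID and region are placeholders, and SKUs are only returned where the VMs are actually available.

```python
# Minimal sketch: list HBv4/HX VM sizes visible in a region, including
# constrained-core variants. Subscription ID and region are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

for sku in client.resource_skus.list(filter="location eq 'eastus'"):
    if sku.resource_type == "virtualMachines" and sku.name.startswith(
        ("Standard_HB176", "Standard_HX176")
    ):
        caps = {c.name: c.value for c in (sku.capabilities or [])}
        print(sku.name, caps.get("vCPUs"), caps.get("MemoryGB"))
```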

400 Gigabit InfiniBand for supercomputing customers

HBv4 and HX VMs are Azure’s first to leverage 400 Gigabit NVIDIA Quantum-2 InfiniBand. This newest generation of InfiniBand brings greater support for the offload of MPI collectives, enhanced congestion control, and enhanced adaptive routing capabilities. Using the new HBv4 or HX-series VMs and only a standard Azure Virtual Machine Scale Set (VMSS), customers can scale CPU-based MPI workloads beyond 50,000 cores per job.

Continuous improvement for Azure HPC customers

Microsoft and AMD share a vision for a new era of high-performance computing in the cloud: one defined by constant improvements to the critical research and business workloads that matter most to our customers. Azure continues to collaborate with AMD to make this vision a reality by raising the bar on the performance, scalability, and value we deliver with every release of Azure H-series VMs.

Figure 2: Azure HPC performance, 2019 through 2022, showing consistent generational gains across HC-series (Skylake), HBv2-series (Rome), HBv3-series (Milan-X), and HX/HBv4-series (Genoa) VMs.

Learn more about the performance of HBv4 and HX-series VMs with 4th Gen EPYC CPUs.

Customer and partner momentum


“We’re pleased to see Altair® AcuSolve®’s impressive linear scale-up on the HBv3 instances, showing up to 2.5 times speedup. Performance increases 12.83 times with an 8-node (512-core) configuration on 3rd Gen AMD EPYC™ processors, an excellent scale-up value for AcuSolve compared to the previous generation, delivering superior price performance. We welcome the addition of the new Azure HBv4 and HX-series virtual machines and look forward to pairing them with Altair software to the benefit of our joint customers.”

—Dr. David Curry, Senior Vice President, CFD and EDEM, Altair


“Customers in the HPC industry continue to demand higher performance and optimizations to run their most mission-critical and data-intensive applications. 4th Gen AMD EPYC processors provide breakthrough performance for HPC in the cloud, delivering impressive time to results for customers adopting Azure HX-series and HBv4-series VMs.”

—Lynn Comp, Corporate Vice President, Cloud Business, AMD


“Ansys electronics, semiconductor, fluids, and structures customers demand more throughput out of their simulation tools to overcome challenges posed by product complexity and project timelines. Microsoft’s HBv3 virtual machines, featuring AMD’s 3rd Gen EPYC processors with 3D V-Cache, have been giving companies a great price/performance crossover point to support these multiphysics simulations on-demand and with very little IT overhead. We look forward to leveraging Azure’s next generation of HPC VMs featuring 4th Gen AMD EPYC processors, the HX and HBv4 series, to enable even greater simulation complexity and speed to help engineers reduce risk and meet time-to-market deadlines.”

—John Lee, Vice President and General Manager, Electronics and Semiconductor, Ansys


“We’ve helped thousands of customers combine the performance and scalability of the cloud, providing ease-of-use and instance access to our powerful computational software, which speeds the delivery of innovative designs. The two new high-performance computing virtual machines powered by the AMD Genoa processor on Microsoft Azure can provide our mutual customers with optimal performance as they tackle the ever-increasing demands of compute and memory capacity for gigascale, advanced-node designs.”

—Mahesh Turaga, Vice President, Cloud Business Development, Cadence


“Hexagon simulation software powers some of the most advanced engineering in the world. We’re proud to partner with Microsoft, and excited to pair our software with Azure’s new HBv4 virtual machines. During early testing in collaboration with the Azure HPC team, we have seen a generational performance speedup of 400 percent when comparing structural simulations running on HBv3 and HX-series VMs. We look forward to seeing what our joint customers will do with this remarkable combination of software and hardware to advance their research and productivity, now and tomorrow. In the first quarter of 2023, we will be benchmarking heavy industrial CFD computations, leveraging multiple HBv4 virtual machines connected through InfiniBand.”

—Bruce Engelmann, CTO, Hexagon


“Microsoft Azure has once again raised the bar for HPC infrastructure in the cloud, this time with the launch of Azure HBv4 and HX virtual machines based on AMD’s 4th Gen EPYC Genoa CPUs. We are expecting strong customer demand for HBv4 and are excited to offer it to our customers that would like to run CFD, EDA, or other types of HPC workloads in the cloud.”

—Mulyanto Poort, Vice President of HPC Engineering at Rescale


“Early testing by AMD with Siemens EDA workloads showed 15 percent to 22 percent improvements in runtimes with Microsoft Azure’s new AMD-based virtual machines compared to the previous generation. Semiconductor chip designers face a range of technical challenges that make hitting release dates extremely difficult. The combined innovation of AMD, Microsoft Azure, and Siemens provides a simplified path to schedule predictability through the increased performance possible with the latest offerings.”

—Craig Johnson, Vice President, Siemens, EDA Cloud Solutions


“Customer adoption of the cloud for chip development is accelerating, driven by complexity and time-to-market advantages. The close collaboration between Synopsys and Microsoft brings together EDA and optimized compute to enable customers to scale under the Synopsys FlexEDA pay-per-use model. Verification represents a significant EDA workload in today’s complex SoCs and with the release of AMD’s next-generation EPYC processor available on Microsoft Azure, customers can take advantage of the optimized cache utilization and NUMA-aware memory layout techniques to achieve up to 2x verification throughput over previous generations.”

—Sandeep Mehndiratta, Vice President of Cloud at Synopsys

Learn more

#AzureHPCAI


Now generally available, Azure Payment HSM secures digital payment systems in the cloud

We are very excited to announce the general availability of Azure Payment HSM, a BareMetal Infrastructure-as-a-Service (IaaS) offering that gives customers native access to payment HSMs in the Azure cloud. With Azure Payment HSM, customers can seamlessly migrate PCI workloads to Azure and meet the stringent security, audit compliance, low-latency, and high-performance requirements of the Payment Card Industry (PCI).

Azure Payment HSM service empowers service providers and financial institutions to accelerate their payment system’s digital transformation strategy and adopt the public cloud.

“Payment HSM support in the public cloud is one of the most significant hurdles to overcome in moving payment systems to the public cloud. While there are many different solutions, none can meet the stringent requirements required for a payment system. Microsoft, working with Thales, stepped up to provide a payment HSM solution that could meet the modernization ambitions of ACI Worldwide’s technology platform. It has been a pleasure working with both teams to bring this solution to reality.”

—Timothy White, Chief Architect, Retail Payments and Cloud, ACI Worldwide

Service overview

The Azure Payment HSM solution is delivered using Thales payShield 10K payment HSMs, which offer single-tenant HSMs and full remote management capabilities. The service is designed to enable total customer control, with strict role and data separation between Microsoft and the customer. HSMs are provisioned and connected directly to the customer’s virtual network, and the HSMs are under the customer’s sole administration control. Once allocated, Microsoft’s administrative access is limited to “Operator” mode, and full responsibility for configuration and maintenance of the HSM and software falls upon the customer. When the HSM is no longer required and the device is returned to Microsoft, customer data is erased to ensure privacy and security. The solution comes with the Thales payShield premium package license and enhanced support plan, with a direct relationship between the customer and Thales.


Figure 1: After the HSM is provisioned, the device is connected directly to the customer’s virtual network, with full remote HSM management capabilities through Thales payShield Manager and TMD.

Customers can quickly add more HSM capacity on demand and subscribe to the highest performance level (up to 2,500 CPS) for mission-critical payment applications with low latency. They can upgrade or downgrade the HSM performance level based on business needs without interrupting production HSM usage. HSMs can easily be provisioned as a pair of devices and configured for high availability.

Azure remains committed to helping customers achieve compliance with the Payment Card Industry’s leading compliance certifications. Azure Payment HSM is certified across stringent security and compliance requirements established by the PCI Security Standards Council (PCI SSC) including PCI DSS, PCI 3DS, and PCI PIN. Thales payShield 10K HSMs are certified to FIPS 140-2 Level 3 and PCI HSM v3. Azure Payment HSM customers can significantly reduce their compliance time, efforts, and cost by leveraging the shared responsibility matrix from Azure’s PCI Attestation of Compliance (AOC).

Typical use cases

Financial institutions and service providers in the payment ecosystem including issuers, service providers, acquirers, processors, and payment networks will benefit from Azure Payment HSM. Azure Payment HSM enables a wide range of use cases, such as payment processing, which allows card and mobile payment authorization and 3D-Secure authentication; payment credential issuing for cards, wearables, and connected devices; securing keys and authentication data and sensitive data protection for point-to-point encryption, security tokenization, and EMV payment tokenization.

Get started

Azure Payment HSM is available at launch in the following regions: East US, West US, South Central US, Central US, North Europe, and West Europe.

As Azure Payment HSM is a specialized service, customers should ask their Microsoft account manager and CSA to send the request via email.

Learn more about Azure Payment HSM

To download PCI certification reports and shared responsibility matrices:


Policy Analytics for Azure Firewall now in preview

This blog was co-authored by Gopikrishna Kannan, Principal Program Manager, Azure Networking.

Network security policies are constantly evolving to keep pace with the demands of workloads. With the acceleration of workloads to the cloud, network security policies—Azure Firewall policies in particular—are frequently changing, often updated multiple times a week (in many cases, several times a day). Over time, Azure Firewall network and application rules grow and can become suboptimal, degrading the firewall’s performance and security. For example, high-volume, frequently hit rules can be unintentionally prioritized too low. In some cases, applications have been migrated to a different network, but the firewall rules referencing the older networks were never deleted.

Optimizing firewall rules is a challenging task for any IT team. Especially for large, geographically dispersed organizations, optimizing Azure Firewall policy can be a manual, complex process involving multiple teams across the world. Updates are risky and can potentially impact a critical production workload, causing serious downtime. Well, not anymore!

Policy Analytics has been developed to help IT teams manage Azure Firewall rules over time. It provides critical insights and recommendations for optimizing Azure Firewall rules with the goal of strengthening your security posture. We are excited to share that Policy Analytics for Azure Firewall is now in preview.

Optimize Azure Firewall rules with Policy Analytics

Policy Analytics helps IT teams address these challenges by providing visibility into traffic flowing through the Azure Firewall. Key capabilities available in the Azure Portal include:

  • Firewall flow logs: Displays all traffic flowing through the Azure Firewall, alongside hit rate and network and application rule matches. This view helps identify top flows across all rules. You can filter flows matching specific sources, destinations, ports, and protocols (a sample log query sketch follows this list).
  • Rule analytics: Displays traffic flows mapped to destination network address translation (DNAT), network, and application rules. This provides enhanced visibility of all the flows matching a rule over time. You can analyze rules across both parent and child policies.
  • Policy insight panel: Aggregates policy insights and highlights policy recommendations to optimize your Azure Firewall policies.
  • Single-rule analysis: The single-rule analysis experience analyzes traffic flows matching the selected rule and recommends optimizations based on those observed traffic flows.
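
These views are built into the portal, but if you want to sanity-check rule hit volumes yourself, a rough sketch of querying the firewall’s diagnostic logs from Python is below, as referenced in the flow logs item above. It assumes your Azure Firewall already sends diagnostics to a Log Analytics workspace and uses the azure-monitor-query package; the workspace ID is a placeholder.

```python
# Minimal sketch: tally Azure Firewall rule hits per hour from diagnostic logs
# in a Log Analytics workspace. Workspace ID below is a placeholder.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

kql = """
AzureDiagnostics
| where Category in ("AzureFirewallNetworkRule", "AzureFirewallApplicationRule")
| summarize Hits = count() by Category, bin(TimeGenerated, 1h)
| order by TimeGenerated desc
"""

response = client.query_workspace(
    "<log-analytics-workspace-id>", kql, timespan=timedelta(days=1)
)
for table in response.tables:
    for row in table.rows:
        print(row)
```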

Deep dive into single-rule analysis

Let’s take a closer look at single-rule analysis. Here we select a rule of interest, analyze its matching flows, and identify optimizations.

Users can analyze Firewall rules with a few easy clicks.


Figure 1: Start by selecting Single-rule analysis.

With Policy Analytics, you can perform rule analysis by picking the rule of interest. For instance, you may want to analyze rules with a wide range of open ports or a large number of sources and destinations.


Figure 2: Select a rule and Run analysis.

Policy Analytics surfaces the recommendations based on the actual traffic flows. You can review and apply the recommendations, including deleting rules which don’t match any traffic or prioritizing them lower. Alternatively, you can lock down the rules to specific ports matching traffic.


Figure 3: Review the results and Apply selected changes.

Pricing

While in preview, enabling Policy Analytics on a Firewall Policy associated with a single firewall is billed per policy as described on the Azure Firewall Manager pricing page. Enabling Policy Analytics on a Firewall Policy associated with more than one firewall is offered at no additional cost.

Next steps

Policy Analytics for Azure Firewall simplifies firewall policy management by providing insights and a centralized view to help IT teams have better and consistent control of Azure Firewall. To learn more about Policy Analytics, see the following resources: