post

On World Water Day, Microsoft is delivering new approaches to ensure we leave no one behind

Today is World Water Day, and this year the theme is “Leaving no one behind.” This is a phrase oft-invoked, but it is particularly important when it comes to water because we are currently leaving some 900 million people – more than one in ten of the world’s population – behind when it comes to safe drinking water, and we’re trending in the wrong direction.

The UN predicts that by 2030, the world may face a 40 percent shortfall in available water. The causes? Climate change is making an already precious resource even more scarce, as rainfall becomes increasingly erratic with temperature changes. Demand is spiking, as the global population grows and consumes more water for farming, industry and personal consumption.

It is a daunting challenge, but a solvable one. It will require far greater understanding of the current state of water on the planet – the location, quantity and quality of freshwater reserves – and how (and how much) is currently being used and by whom. Then, we can use this information to drive efficiencies in delivery and consumption, incentivize behavior change on a local and global level and drive even greater innovation.

Water everywhere and not a drop to drink
Solving the water challenge begins with understanding where the most challenged areas are. Organizations like the World Resources Institute (WRI) and The Nature Conservancy are doing a great deal of work on this issue. The Nature Conservancy’s Protecting Water Atlas aims to drive better decision-making by showing the benefits of investments in water. WRI’s Aqueduct Water Risk Atlas shows both current and future conditions of where water-related risks are most severe, helping decision-makers understand and plan for potential changes in water availability due to climate change and economic development. Microsoft uses the WRI tool in both our global real estate portfolio planning and management and our climate resilience assessments, and supports The Nature Conservancy’s coastal resilience toolkit through AI for Earth and Azure credits.

It’s not just about measuring risk – it’s about managing it through proactive approaches. This includes effective conservation measures. Water leakage is one area where improvements could make a big difference. In England and Wales alone, the nonprofit organization Discover Water estimates that 3,183 million liters of water are leaked each day. That’s equivalent to filling 1,273 Olympic swimming pools per day! This isn’t just a U.K. problem; it’s a global problem. The World Bank estimates that, on average, 25 to 30 percent of a utility’s water is lost in the network, and in developing countries as much as 45 million cubic meters are said to be lost daily through leaks.

This prompted Powel, a European software solutions provider, to work with Microsoft to create an Internet of Things solution called SmartWater that can detect these leaks early so utilities can react. The solution monitors water flow into a distribution system and, with the help of machine learning, detects anomalies in near real time so action can be taken.
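As a rough illustration of the underlying idea (a minimal sketch, not Powel’s SmartWater implementation; the window size, threshold and sample readings are assumptions invented for the example), leak detection can be framed as flagging flow readings that deviate sharply from a rolling baseline:

    # Illustrative leak detection: flag flow readings that deviate sharply
    # from a rolling baseline of recent samples (not Powel's actual system).
    from collections import deque
    from statistics import mean, stdev

    def detect_anomalies(flow_readings, window=96, threshold=3.0):
        """Yield (index, value) for readings more than `threshold` standard
        deviations from the rolling mean of the previous `window` samples."""
        history = deque(maxlen=window)
        for i, value in enumerate(flow_readings):
            if len(history) == window:
                mu, sigma = mean(history), stdev(history)
                if sigma > 0 and abs(value - mu) > threshold * sigma:
                    yield i, value
            history.append(value)

    # Example: hourly inflow (liters per second) with a sudden overnight jump
    readings = [50 + (hour % 24) for hour in range(480)] + [120] * 10
    for idx, val in detect_anomalies(readings):
        print(f"Possible leak signal at sample {idx}: {val} L/s")

A production system would also model daily and seasonal demand cycles, but the principle is the same: learn what normal inflow looks like and raise an alert quickly when it is exceeded.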

Beyond conservation, some organizations are looking at water replenishment efforts motivated by the data. Microsoft is one of them. Through our early-stage initiative, we are identifying water-stressed areas around the world and the best partners to collaborate with in each region, and we are investing in projects that improve water conditions and alleviate water stress in those areas. That’s why one fall day last year, some Microsoft employees built beaver dams in Washington state. These beaver dam analogs offer water availability and quality benefits, and they represented our first public investment in this area.

We’re also engaging in collaborative platforms, such as the UN CEO Water Mandate, to identify opportunities for collective action that aligns and amplifies the commitments of individual companies toward achieving Sustainable Development Goal 6.

Beyond conservation to transformation
Water is one of the four key issue areas of our AI for Earth program, a $50 million, five-year commitment to providing AI tools to researchers around the globe working on environmental challenges. More than 230 grantees are doing work, enabled by AI, in more than 60 countries on challenges related to water, as well as agriculture, biodiversity and climate change. Ultimately, these issues are interrelated – it’s difficult to solve any of the challenges in one area without addressing the others. Here are three grantees working across those disciplines, with AI, to drive new insights and behaviors – from detecting harmful algal blooms, to precision agriculture with an eye toward water availability, to predicting events like floods when we have too much water:

Providing early warning of harmful algal bloom outbreaks
For many years, the waters of Lake Atitlán in the Guatemalan highlands were pristine, a landmark for natural beauty and biodiversity. However, in 2009 the lake experienced the first of several harmful algal blooms (HABs) – out-of-control colonies of algae that suck oxygen out of the water and make it potentially toxic to life.

Africa Flores describes that first HAB in Lake Atitlán as a wake-up call for action to preserve its precious biodiversity. But Guatemala has limited resources and means to investigate and better understand the causes and help predict and prevent future outbreaks. Thankfully, Flores’ work as a research scientist at the Earth System Science Center at the University of Alabama in Huntsville allows her to focus on this very issue.

Flores and her team will conduct deep analyses of image datasets from different satellites. Machine learning will help them identify the variables that could predict future algal blooms. Knowledge of what those triggers are can turn into precise preventative action, not just in the lake in Flores’ home country but also in other freshwater bodies with similar conditions in Central and South America.
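As a rough, hypothetical illustration of that variable-selection step (not the team’s actual pipeline; the candidate variables and synthetic data below are invented for the example), one common approach is to fit a model on candidate predictors and inspect which ones carry the most signal:

    # Illustrative only: given per-date summaries derived from satellite
    # imagery, fit a simple model and inspect which variables are most
    # associated with subsequent bloom events.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    features = ["surface_temp_c", "turbidity", "chlorophyll_a", "recent_rain_mm"]

    # Synthetic stand-in data: one row per observation date
    n = 600
    X = rng.normal(loc=[24, 5, 10, 30], scale=[3, 2, 4, 25], size=(n, 4))
    X[:, 3] = np.clip(X[:, 3], 0, None)  # rainfall cannot be negative
    bloom_risk = 0.3 * X[:, 0] + 0.5 * X[:, 2] + 0.02 * X[:, 3]
    y = (bloom_risk + rng.normal(0, 2, n) > 13.0).astype(int)

    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    for name, importance in zip(features, model.feature_importances_):
        print(f"{name}: {importance:.2f}")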

Improving agricultural water use efficiency with AI
As climate change disrupts weather patterns, rainfall is becoming more unreliable. Farmers are drilling more wells for center-pivot irrigation – a method where crops are watered with sprinklers rotating around a central source. However, this approach can lead to lowered or even drained water tables, salination of coastal aquifers, land subsidence and disruption to ecosystems.

Kelly Caylor, a professor of ecohydrology at the University of California, Santa Barbara, is investigating how much water is being used from these groundwater sources. He is developing a web tool that uses machine learning to identify active crop fields in satellite imagery and geospatial analysis tools to monitor how crops change over time. By knowing where crops are growing and for how long, and then correlating that with weather data, the system can also infer how much water is being used.
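To illustrate that final inference step in very simplified form (this is not Caylor’s model; the water-balance formula, crop coefficient and season data are assumptions made for the example), irrigation can be roughly estimated as the crop water demand that rainfall does not cover:

    # Highly simplified, hypothetical water balance: once active fields and
    # their growing periods are known, estimate irrigation as crop water
    # demand not met by rainfall.
    def estimate_irrigation_mm(daily_et_mm, daily_rain_mm, crop_coefficient=1.0):
        """Return estimated irrigation depth (mm) over a growing period.

        daily_et_mm      -- reference evapotranspiration per day (weather data)
        daily_rain_mm    -- rainfall per day
        crop_coefficient -- scales reference ET to the crop's actual demand
        """
        irrigation = 0.0
        for et, rain in zip(daily_et_mm, daily_rain_mm):
            deficit = crop_coefficient * et - rain
            if deficit > 0:
                irrigation += deficit  # assume irrigation fills the gap
        return irrigation

    # Example: a 120-day season, 6 mm/day reference ET, sparse rainfall
    season_et = [6.0] * 120
    season_rain = [20.0 if day % 15 == 0 else 0.0 for day in range(120)]
    mm = estimate_irrigation_mm(season_et, season_rain, crop_coefficient=1.1)
    print(f"Estimated irrigation: {mm:.0f} mm over the season")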

A better understanding of how much groundwater is used by center-pivot irrigation will open opportunities to develop more efficient practices, as well as policies for better water stewardship. With the online map and tools, farmers, water resource managers, policymakers and the public will be better able to make agriculture more land- and water-efficient.

Improving long-range forecasts for flood prediction
Climate change disruption to weather patterns sometimes means drought and sometimes means flooding. Already, a United Nations study has shown an increase in weather-related disasters since 1995, with floods accounting for nearly half. Climate change projections suggest that the frequency and severity of floods will increase in years to come as temperatures rise. And flooding threatens the most people in some of the countries least able to predict or prevent the devastation.

To make these regions more resilient, long-range forecasts for precipitation and flooding risk must be improved. Existing weather forecast models have been shown to routinely underestimate precipitation even the day before, and neither amount nor location can be predicted accurately five days in advance. But professors Wei Ding and Shafiqul Islam are leading a small team to develop machine learning models with the goal of accurately predicting floods up to 15 days in advance.

The team’s approach is to process enormous historical weather data sets and look for patterns that precede flooding. With this analysis, they plan to build a new forecasting model that can give early flood warnings to vulnerable populations around the world. More accurate and timely predictions will help reduce the overall impact of these disasters.
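As a toy illustration of that pattern-finding idea (not the team’s model; the features, thresholds and synthetic data are made up for the example), one could train a simple classifier that maps recent precipitation patterns to the likelihood of a flood in the following days:

    # Illustrative sketch: learn a classifier mapping recent precipitation
    # patterns to "flood within the next 15 days", using synthetic data in
    # place of the large historical archives described above.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Features: rainfall totals over the past 5, 10 and 30 days (mm), plus an
    # upstream river-level reading; label: 1 if a flood followed within 15 days.
    n = 2000
    X = rng.gamma(shape=2.0, scale=[20, 35, 80, 1.0], size=(n, 4))
    risk = 0.02 * X[:, 0] + 0.01 * X[:, 1] + 0.005 * X[:, 2] + 0.8 * X[:, 3]
    y = (risk + rng.normal(0, 1, n) > 5.0).astype(int)

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Probability of flooding within 15 days for a new observation
    sample = np.array([[180.0, 260.0, 400.0, 4.2]])
    print(f"Flood probability: {model.predict_proba(sample)[0, 1]:.2f}")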

Transformations don’t have to be fueled by AI to make a difference. Microsoft is also making it easy for you to get engaged – just watch some Minecraft! Our team has been hard at work on the “Village and Pillage” update, which includes a redesign of water wells. This weekend, we’re supporting the nonprofit Charity: Water’s effort to bring clean water to people around the world through its “Weekend for Water.” All you have to do is tune in to livestreams of Minecraft players – you can make donations, the streamers will be giving away Minecoins, and the money raised will help dig wells to provide clean water around the globe.

So this World Water Day, I encourage you to take action, and encourage your friends, neighbors, employers and government officials to take action as well. It will take all of us to ensure no one is left behind, and that work should begin today.


post

Wales to become one of the first countries in the world to give schools free access to Microsoft 365

Wales is helping nearly half a million young people improve their digital skills by becoming one of the first countries in the world to give all local authority schools free access to Microsoft 365.

The Welsh Government will pay for all 1,521 “maintained” schools to have access to programs such as Word, Excel and PowerPoint, in a bid to boost the use of technology among pupils and reduce costs for families and headteachers.

As part of the £1.2 million investment, which is expected to benefit around 467,000 young people, all teachers and students will be able to download and install the latest version of Office 365 ProPlus on up to five personal devices. Pupils can then collaborate and continue learning at home using the same programs as they do in the classroom.

Kirsty Williams, Minister for Education, said: “I’m proud to say we’re one of the first countries in the world to take this progressive approach to providing schools with this software. Through our curriculum reforms we want all learners to have relevant high-level digital, literacy and numeracy skills, and access to these applications is an important step towards achieving that.

“This will reduce the burden on schools to pay for their own licensing fees and also ensure all our schools have the same level of access to the digital tools they need to progress these skills in our learners. This is vital as we aim to reduce the attainment gap and increase standards in our schools.”



The deal also includes Minecraft: Education Edition, which contains Code Builder. This version of the popular block-building game will allow teachers and students to learn coding using Tynker and Microsoft MakeCode, and supports the Welsh Government’s Cracking the Code plan to encourage coding in every part of the country.

Users will be able to securely access Office 365 ProPlus in Welsh or English via the government’s Hwb digital learning platform, which is available to all maintained schools and actively used by more than 85% of them.

Online versions of the software will continue to be available through Hwb for use in public spaces such as libraries.

Cindy Rose, Chief Executive of Microsoft UK, said: “The introduction of Office 365 will be transformational for both teachers and pupils, empowering them to collaborate more effectively, saving time and generating better learning outcomes. Equally, Office 365 provides students with valuable skills to help them obtain employment following school.

“Additionally, the accessibility tools built into Office 365 will mean all students gain the confidence to contribute to learning discussions. Similarly, with Minecraft: Education Edition, students will develop computational thinking skills in an immersive and classroom-friendly format that sparks creativity and innovation. This agreement ensures Wales retains its position as a world leader in digital education delivery.”



The Office deal comes just days after the Welsh Government announced that Flipgrid, the social and personal learning program used by millions of teachers in more than 180 countries, will also be available to schools in Wales via Hwb.

Flipgrid lets teachers see and hear from every pupil in their class by posting questions to online discussion boards, called grids. Students answer by creating short videos but can practice and perfect their response before posting, helping to increase confidence and improve public speaking.

The move to make Office 365 ProPlus and Flipgrid available to schools will support the Welsh Government’s Digital Competence Framework (DCF), which was launched in 2016 and aims to help people develop the skills that will help them thrive in an increasingly digital world.

According to the US Department of Labor, 65% of today’s students will end up working in jobs that don’t exist yet, and more than 500,000 highly-skilled workers will be needed to fill digital roles by 2022 – three times the number of UK computer science students who graduated in the past 10 years. Just 5% of computer scientists are female, while people returning to work and those from black and minority ethnic backgrounds are also vastly underrepresented within the sector.

The DCF aims to tackle this issue by setting out the digital skills to be attained by students aged between three and 16, including communication, collaboration, creativity, data, problem solving and online behaviour.


post

‘ID@Xbox Game Pass’ debuts March 26

We are excited to announce ID@Xbox Game Pass, a stream highlighting great indie games coming soon to Xbox Game Pass, premiering March 26 at 9:00 AM PDT. In this show, you can expect to learn more about some of the hottest ID@Xbox titles coming to Xbox Game Pass with new reveals, gameplay highlights, and conversations with the developers. Fans will be able to check out our first ever episode here.

In our debut, we’ll dive into games previously shown at E3 and X018. Expect hits such as Afterparty, Void Bastards, and Supermarket Shriek to be highlighted. In addition to new game announcements, we’ll also share a visit to Night School Studio, the creative team behind the hit game Oxenfree, for a behind-the-scenes look at their upcoming game Afterparty. Extra made up bonus internet points if you tweet to us about your advanced frisbee college courses.

Hope you join us for our debut episode on March 26 at 9:00 AM PDT!

And to stay up to date with the latest news and announcements, be sure to follow us on Instagram and Twitter. With our Xbox Game Pass mobile app, you can also get notifications as new games are added and remote install games to your home console as soon as they are available!

post

Educators: Bring Jane Goodall to your classroom with April 2 and 9 Skype events

Hello, changemakers and compassionate citizens!

Dr. Jane Goodall is one of the biggest changemakers in history, making huge discoveries about chimpanzees and dedicating her life to making things better all over the world. Despite growing threats to wildlife, people and ecosystems, she still has plenty of hope for the future. Why? Because she believes in the power of young people motivated to make a difference. That’s why she created the Roots & Shoots program of the Jane Goodall Institute!

So, what is it, and how did it start? It all began in 1991 on Jane Goodall’s front porch in Tanzania, when a small group of students told Jane they felt powerless thinking about the problems all around them. This is something she had heard from people everywhere she went, but what could she do? All at once Jane realized the solution was right in front of her. She encouraged the group to use their voices and ideas to address the issues they saw, the things that mattered most to them. Roots & Shoots was born.

In the 28 years since it started, Roots & Shoots has expanded its reach to over 80 countries around the globe, impacting the lives of countless young people. This very special program has been providing young people with the skill-building opportunities and tools they need to make a positive impact in their communities.

Roots & Shoots is all about finding solutions by looking around and getting involved to address issues facing people, animals and the environment. This holistic approach, using the R&S 4-Step Formula, makes Roots & Shoots an “easy-to-adopt” framework for creating a generation dedicated to building a more sustainable planet. The program operates with a firm understanding that young people aren’t waiting until tomorrow to take action; they’re facing problems today and addressing the issues facing the planet head-on, right now.

“More and more young people around the world are taking action now, today, on projects they are truly passionate about. I am very excited to have the opportunity to connect with classrooms around the globe for this Skype in the Classroom broadcast and to discuss how we can improve the world for people, animals and the environment we share.”

– Dr. Jane Goodall, DBE, Founder of the Jane Goodall Institute, UN Messenger of Peace

You may be wondering how young people can get involved with the program. The GREAT news is that there are many ways to participate as a part of Roots & Shoots! The program has a diverse network of change-makers and allows individuals to get involved at ANY LEVEL they feel comfortable with. No action is too small! During the Skype in the Classroom broadcast, you’ll have the opportunity to explore the actions you and the young people you mentor can take today and gain the skills to continue building service-learning campaigns in your own communities.

Roots & Shoots provides resources for both youth activists and adult mentors and empowers young people to become the type of leaders who will make compassionate decisions to make the world a better place. Through the program, youth activists lead local change through service while developing the skills and traits of compassionate citizens.

Whether it’s natural disasters, homelessness, pollution or even climate change, being a part of Roots & Shoots means choosing what kind of difference you want to make. From that front porch, a new generation has emerged to create a global movement. Young people in Roots & Shoots are not only the future, they’re the present, and they’re changing the world today.

An Idea for Educators

Adult mentors interested in assisting youth activists in their journey to make the world a better place have the opportunity to participate in the Roots & Shoots FREE online course for educators. Throughout the course, educators unlock the skills necessary for fostering the growth of compassionate citizens. Not only will educators receive professional development through this course, but they will then be able to mentor youth in designing community action campaigns using the Roots & Shoots program model.

Connecting with students through Skype in the Classroom

Roots & Shoots is so delighted for the opportunity to work with Skype in the Classroom to bring an exciting broadcast and live chat experience to your students on April 2nd & 9th. Classrooms around the world will be able to tune in to this Skype in the Classroom broadcast event as we explore together how to make a difference in our own communities! Dr. Jane and the Jane Goodall Institute team will be answering as many questions as they can in the live chat.

We’re looking forward to sharing this experience with so many change-makers!

Ok, I’m in! How can I join the event and prepare my classroom?

Share the plans on your participation and preparations for the event with us @SkypeClassroom, @JaneGoodallInst and @RootsandShoots with #Skype2Learn.

post

Public preview of Windows Virtual Desktop now available

Last September, we announced Windows Virtual Desktop and began a private preview. Since then, we’ve been hard at work developing the ability to scale and deliver a true multi-session Windows 10 and Office 365 ProPlus virtual desktop and app experience on any device.

Today, we move to the next phase and announce the public preview of Microsoft Windows Virtual Desktop. Now, all customers can access this service—the only service that delivers simplified management, a multi-session Windows 10 experience, optimizations for Office 365 ProPlus, and support for Windows Server Remote Desktop Services (RDS) desktops and apps. With Windows Virtual Desktop, you can deploy and scale your Windows desktops and apps on Azure in minutes and enjoy built-in security.

Through our private preview, we had the chance to work closely with customers and partners to help shape this new service. It has been rewarding to see the results so far—a great example being at X5 Music Group, a Warner Music Group company.

“Within the music industry, we have to access, manage, and store large volumes of complex metadata securely. Windows Virtual Desktop is a great way of bringing data-heavy applications into our cloud platform without the need to rewrite the application. Windows Virtual Desktop also provides several additional benefits, such as making it really easy to scale the number of users while minimizing the attack surface of our applications.”
—Klas Broman, CTO and Developer Lead, X5 Music Group

As we start public preview, we’ll continue listening and taking feedback, to ensure we’re meeting your needs as we head toward general availability in the second half of calendar year 2019.

With the end of extended support for Windows 7 coming up in January 2020, we also understand some customers need to continue to support Windows 7 legacy applications as they migrate to Windows 10. To support this need, you’ll soon be able to use Windows Virtual Desktop to virtualize Windows 7 desktops with free Extended Security Updates (ESU) until January 2023. This support provides a comprehensive virtualization solution for Windows 7 alongside your Windows 10 and Windows Server desktops and apps.

Solutions to extend Windows Virtual Desktop

In November 2018, we acquired FSLogix, a next-generation app-provisioning platform that reduces the resources, time, and labor required to support desktop and app virtualization. FSLogix technologies enable faster load times for non-persistent users accessing Outlook or OneDrive. FSLogix technology will support both client and server RDS deployments—helping on-premises customers more easily migrate to Windows Virtual Desktop and providing a great solution for customers in hybrid scenarios.

Windows Virtual Desktop will also be extended and enriched by leading partners in the following ways:

  • Citrix can extend Windows Virtual Desktop capabilities with their Citrix Cloud services.
  • Through our partnership with Samsung, Windows Virtual Desktop will provide highly mobile Firstline Workers access to a full Windows 10 and Office 365 ProPlus experience with Samsung DeX.
  • Software and service providers will extend Windows Virtual Desktop to offer targeted solutions in the Azure marketplace.
  • Microsoft Cloud Solution Providers (CSPs) will deliver end-to-end desktop-as-a-service (DaaS) offerings and value-added services to their customers.

Access to Windows Virtual Desktop

To deploy and manage your virtualization environment, you just need to set up an Azure subscription. You can choose the type of virtual machines (VMs) and storage you want to suit your environment. You can optimize costs by taking advantage of Reserved Instances (up to 72 percent discount) and by using multi-session Windows 10.

For users accessing the Windows 10 and Windows 7 desktops and apps, there’s no additional cost if you’re an existing Microsoft 365 F1/E3/E5, Windows 10 Enterprise E3/E5, or Windows VDA customer. For Windows Server desktops and apps, there’s no additional cost if you’re an existing Microsoft RDS Client Access License (CAL) customer.

Get started with the public preview of Windows Virtual Desktop

Windows Virtual Desktop comprises the Windows desktops and apps you’re delivering to users and the management solution, hosted as a service on Azure by Microsoft. During public preview, desktops and apps can be deployed on VMs in any Azure region, while the management solution and data for these VMs will reside in the United States (US East 2 region). This may result in data transfer to the United States while you test the service in public preview.

We’ll start to scale out the management solution and data localization to all Azure regions starting at general availability. For more information on getting started, considerations for optimal deployment guidance, and to provide feedback as you preview the service, please visit the Windows Virtual Desktop preview page.

post

New Bing Maps traffic coloring makes it easier to choose the fastest route

There is an old saying that you don’t know where you are going until you get there. With Bing Maps new route coloring feature, you will know right away where the delays will be along your selected route so you can change your route, your plans or your destination based on the route ahead!

For example, if you are leaving Redmond Town Center for Westlake Center in Seattle, you can see the delays on WA-520 W and I-90 W before you decide which route to take. Also, with our new route labels showing the travel mode, distance and time of each route, you can easily compare and toggle between the different routes quickly on the map.

Bing Maps Traffic Coloring

While blue means no traffic delays, orange and red highlight moderate to heavy traffic delays on the route. These are calculated from a combination of current traffic updates and predictions from historical data, depending on the length of the route.
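As a simplified illustration of how such coloring could be derived (this is not Bing Maps’ actual algorithm; the blending weight and delay thresholds are assumptions made for the example), one can blend live and historical travel-time estimates per segment and map the overall delay ratio to a color:

    # Illustrative route coloring: blend live and historical travel-time
    # estimates for each segment, then map the overall delay ratio to a color.
    def route_color(segments, live_weight=0.7):
        """segments: list of (free_flow_min, historical_min, live_min)."""
        free_flow = sum(s[0] for s in segments)
        expected = sum(live_weight * s[2] + (1 - live_weight) * s[1]
                       for s in segments)
        delay_ratio = expected / free_flow
        if delay_ratio < 1.15:
            return "blue"    # little or no delay
        elif delay_ratio < 1.5:
            return "orange"  # moderate delay
        return "red"         # heavy delay

    # Example: three segments of a commute, in minutes
    print(route_color([(10, 12, 11), (8, 9, 16), (5, 6, 7)]))  # "orange"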

Traffic coloring not only helps you select the best route for your trip, but can also be very useful when there are major traffic delays due to inclement weather, big events, accidents, or road construction nearby. For example, if there is an MLB or NFL game in town, you can avoid the most impacted roads near the event and choose an option that offers the least delays.

In addition, if you need to take a ferry as part of your route, Bing Maps visualizes the ferry segments using dashes to differentiate that part of the trip. The image below illustrates the route between Bellevue and Bainbridge Island in Washington State. Bing Maps highlights the ferry segment between Seattle and Bainbridge with a dashed line.

Bing Maps Traffic Coloring Ferry Route

The Bing Maps Routing and Traffic Team is constantly working to make navigation and route planning easier for our users. To try out the traffic coloring feature, go to https://www.bing.com/maps.

 – Bing Maps Team

post

‘Vigor’ free weekend begins for Xbox Live Gold members

Xbox Live Gold members can play Vigor for free on Xbox One from today until March 24, making this the perfect time to explore the beautiful landscapes of post-war Norway, scavenge for loot amongst other Outlanders in tense map-based encounters and start rebuilding your safe haven amidst a fallen civilization.

This free weekend arrives shortly after the release of the 0.8 “Signal” update, which brought new content and game events, a player lobby and gesture system, as well as numerous improvements and an enhanced overall gameplay experience. Previous updates 0.7 and 0.6 had already added optional objectives, randomized events, a duos mode, improved weapon play (with more than 17 additional weapons) and numerous bug fixes and optimizations.

Although Vigor will release on Xbox One as a free-to-play title this year, players can opt to continue playing – and supporting – the game after the Xbox Live free play weekend ends by purchasing it during its Xbox Game Preview period, which also unlocks the Founder’s Pack. This not only gives you immediate full access to the game but also rewards you with exclusive in-game items obtainable only during the Xbox Game Preview period of development. Please note that any progress made before 1.0 will be reset once the game fully releases – more info soon on vigorgame.com.

If the idea of a tense shoot ’n’ loot experience set against the picturesque backdrop of 1990s Norway piques your interest, then be sure to head to the Microsoft Store and download Vigor for free starting today.

See you in the game, outlanders!

post

From Microsoft Ignite in Amsterdam: New Microsoft 365 enhancements to reduce costs, increase security and boost productivity

Today, we’re announcing several new Microsoft 365 enhancements to help IT reduce costs, increase security, and boost employee productivity.

Here’s a quick summary:

  • Windows Virtual Desktop is now in public preview, providing the best virtualized Microsoft 365 experience across devices.
  • Microsoft Defender Advanced Threat Protection (ATP) now supports Mac, extending Microsoft 365 advanced endpoint security across platforms.
  • The new Microsoft Threat and Vulnerability Management (TVM) capability in Microsoft Defender ATP will help detect, assess, and prioritize threats across endpoints.
  • Office 365 ProPlus will now include the Microsoft Teams app, enabling a new way to work.
  • We’re reducing the time it takes to apply Windows 10 feature updates, making it easier to deploy and service Windows 10.
  • We’re enhancing Configuration Manager and Microsoft Intune with new insights and deployment options to make it easier to manage your devices across platforms.
  • Microsoft 365 admin center is now generally available.

Virtualize Windows 10 and Office 365 on Azure with Windows Virtual Desktop—now in public preview

Today, we’re happy to announce the public preview of Windows Virtual Desktop. Windows Virtual Desktop is the only service that delivers simplified management, multi-session Windows 10, optimizations for Office 365 ProPlus, and support for Remote Desktop Services (RDS) environments in a shared public cloud. With Windows Virtual Desktop, you can deploy and scale your Windows desktops and apps on Azure in minutes, with built-in security and compliance.

For more information about Windows Virtual Desktop or how to get started with the public preview, read the full announcement and watch the new Mechanics video.

Address risks and protect more of your Microsoft 365 devices and endpoints with Microsoft Defender ATP—now in public preview

New today, we’re extending support for our Microsoft Defender threat protection platform to Mac. And because we’re extending support beyond the Windows ecosystem, we’re renaming the platform from Windows Defender Advanced Threat Protection (ATP) to Microsoft Defender Advanced Threat Protection (ATP). Starting today, Microsoft Defender ATP customers can sign up for a public preview. For more information, visit our Tech Community blog.

We’re also announcing Threat and Vulnerability Management (TVM), a new capability within Microsoft Defender ATP, designed to empower security teams to discover, prioritize, and remediate known vulnerabilities and misconfigurations exploited by threat actors. Using TVM, customers can evaluate the risk-level of threats and vulnerabilities and prioritize remediation based on signals from Microsoft Defender ATP. TVM will be available as a public preview for Microsoft Defender ATP customers within the next month. Learn more about it in our Tech Community blog.

Today’s security announcements are an important milestone in our Microsoft 365 endpoint security journey. For more details, check out Rob Lefferts’s post on the Microsoft Security blog.

Enable a new way to work with Office 365 ProPlus and Teams—starting in March

Starting in March, new installs of Office 365 ProPlus will include the Teams app by default. As a “hub for teamwork,” Teams combines chat, voice, video, files, meetings, and calls into a single, integrated experience.

In addition, the default installation for ProPlus will now be 64 bit, enabling better reliability and more effective use of newer PC hardware. If you have earlier 32-bit installs, a soon-to-be-released in-place upgrade from 32-bit to 64-bit Office 365 ProPlus will allow you to upgrade the Office apps without uninstalling and reinstalling.

Reducing the time required for Windows 10 feature updates—starting with version 1709

We made important changes to the Windows update process. Starting with Windows 10 version 1709, devices are updating up to 63 percent faster. Additionally, with the release of Windows 10 version 1703, we’ve seen a 20 percent reduction in operating system and driver stability issues.

Simplify and modernize management with Configuration Manager and Intune

Configuration Manager current branch offers CMPivot for real-time queries and updates to management insights that help with co-management readiness. What’s more, you can now take advantage of new deployment options, including phased deployments and configuring known-folder mapping to OneDrive.

Mobile Device Management (MDM) Security Baselines are now in preview in Intune. These baselines are a group of Microsoft-recommended configuration settings that increase your security posture and operational efficiency and reduce costs. We’re also announcing several new Intune capabilities for unified endpoint management across devices and platforms.

Check out What’s new in Microsoft Intune and Configuration Manager for more detailed information on our broad unified endpoint management investments.

Manage Microsoft 365 with a new admin center—rolling out now

We’re also announcing that the new Microsoft 365 admin center, previously in preview, will become the default experience for all Microsoft 365 and Office 365 admins. Admin.microsoft.com is your single entry point for managing your Microsoft 365 services and includes new features like guided setup experiences, improved groups management, Multi-Factor Authentication (MFA) for admins, and more.

For more information on this new release, check out the detailed post on the Microsoft 365 Tech Community blog.

More at Microsoft Ignite: The Tour in Amsterdam

We’re sharing more on each of these announcements this week at Microsoft Ignite: The Tour in Amsterdam. I’ll be there to co-present a session with Jeremy Chapman on “Simplifying IT with Windows 10 and Office 365 ProPlus.” You’ll have a chance to learn more from many of my colleagues in the teamwork, modern desktop, and security sessions. I hope to see you there!

Editor’s note 3/21/2019:
Blog post was updated to correct information regarding Configuration Manager current branch.

post

With a ‘hello,’ Microsoft and UW demonstrate first fully automated DNA data storage

Researchers from Microsoft and the University of Washington have demonstrated the first fully automated system to store and retrieve data in manufactured DNA — a key step in moving the technology out of the research lab and into commercial datacenters.

In a simple proof-of-concept test, the team successfully encoded the word “hello” in snippets of fabricated DNA and converted it back to digital data using a fully automated end-to-end system, which is described in a new paper published March 21 in Nature Scientific Reports.

DNA can store digital information in a space that is orders of magnitude smaller than datacenters use today. It’s one promising solution for storing the exploding amount of data the world generates each day, from business records and cute animal videos to medical scans and images from outer space.

Microsoft is exploring ways to close a looming gap between the amount of data we are producing that needs to be preserved and our capacity to store it. That includes developing algorithms and molecular computing technologies to encode and retrieve data in fabricated DNA, which could fit all the information currently stored in a warehouse-sized datacenter into a space roughly the size of a few board game dice.

“Our ultimate goal is to put a system into production that, to the end user, looks very much like any other cloud storage service — bits are sent to a datacenter and stored there and then they just appear when the customer wants them,” said Microsoft principal researcher Karin Strauss. “To do that, we needed to prove that this is practical from an automation perspective.”

Information is stored in synthetic DNA molecules created in a lab, not DNA from humans or other living things, and can be encrypted before it is sent to the system. While sophisticated machines such as synthesizers and sequencers already perform key parts of the process, many of the intermediate steps until now have required manual labor in the research lab. But that wouldn’t be viable in a commercial setting, said Chris Takahashi, senior research scientist at the UW’s Paul G. Allen School of Computer Science & Engineering.

“You can’t have a bunch of people running around a datacenter with pipettes — it’s too prone to human error, it’s too costly and the footprint would be too large,” Takahashi said.


For the technique to make sense as a commercial storage solution, costs need to decrease for both synthesizing DNA — essentially custom building strands with meaningful sequences — and the sequencing process that extracts the stored information. Trends are moving rapidly in that direction, researchers say.

Automation is another key piece of that puzzle, as it would enable storage at a commercial scale and make it more affordable, Microsoft researchers say.

Under the right conditions, DNA can last much longer than current archival storage technologies, which degrade in a matter of decades. Some DNA has managed to persist in less-than-ideal storage conditions for tens of thousands of years in mammoth tusks and bones of early humans, and it should remain relevant as long as people are around to read it.

The automated DNA data storage system uses software developed by the Microsoft and UW team that converts the ones and zeros of digital data into the As, Ts, Cs and Gs that make up the building blocks of DNA. Then it uses inexpensive, largely off-the-shelf lab equipment to flow the necessary liquids and chemicals into a synthesizer that builds manufactured snippets of DNA and to push them into a storage vessel.

When the system needs to retrieve the information, it adds other chemicals to properly prepare the DNA and uses microfluidic pumps to push the liquids into other parts of the system that “read” the DNA sequences and convert them back into information that a computer can understand. The goal of the project was not to prove how fast or inexpensively the system could work, researchers say, but simply to demonstrate that automation is possible.
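As a minimal illustration of the digital-to-DNA conversion (the real system layers on error-correcting codes and avoids sequences that are hard to synthesize or sequence; the simple two-bits-per-base mapping below is only an assumption for the example), each pair of bits can map to one of the four bases:

    # Illustrative digital-to-DNA conversion: two bits per base, and back.
    BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
    BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

    def encode(data: bytes) -> str:
        bits = "".join(f"{byte:08b}" for byte in data)
        return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

    def decode(strand: str) -> bytes:
        bits = "".join(BASE_TO_BITS[base] for base in strand)
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

    strand = encode(b"hello")
    print(strand)                  # "CGGACGCCCGTACGTACGTT"
    assert decode(strand) == b"hello"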

One immediate benefit of having an automated DNA storage system is that it frees researchers up to probe deeper questions, instead of spending time searching for bottles of reagents or repetitively squeezing drops of liquids into test tubes.

“Having an automated system to do the repetitive work allows those of us working in the lab to take a higher view and begin to assemble new strategies — to essentially innovate much faster,” said Microsoft researcher Bichlien Nguyen.

The team from the Molecular Information Systems Lab has already demonstrated that it can store cat photographs, great literary works, pop videos and archival recordings in DNA, and retrieve those files without errors in a research setting. To date they’ve been able to store 1 gigabyte of data in DNA, besting their previous world record of 200 MB.

To store data in DNA, algorithms convert the 1s and 0s in digital data to ACTG sequences in DNA. Microsoft and University of Washington researchers stored and retrieved the word “hello” using the first fully automated system for DNA storage.

The researchers have also developed techniques to perform meaningful computation — like searching for and retrieving only images that contain an apple or a green bicycle — using the molecules themselves and without having to convert the files back into a digital format.

“We are definitely seeing a new kind of computer system being born here where you are using molecules to store data and electronics for control and processing. Putting them together holds some really interesting possibilities for the future,” said UW Allen School professor Luis Ceze.

Unlike silicon-based computing systems, DNA-based storage and computing systems have to use liquids to move molecules around. But fluids are inherently different from electrons and require entirely new engineering solutions.

The UW team, in collaboration with Microsoft, is also developing a programmable system that automates lab experiments by harnessing the properties of electricity and water to move droplets around on a grid of electrodes. The full stack of software and hardware, nicknamed “Puddle” and “PurpleDrop,” can mix, separate, heat or cool different liquids and run lab protocols.

The goal is to automate lab experiments that are currently being done by hand or by expensive liquid handling robots — but for a fraction of the cost.

Next steps for the MISL team include integrating the simple end-to-end automated system with technologies such as PurpleDrop and those that enable searching with DNA molecules. The researchers specifically designed the automated system to be modular, allowing it to evolve as new technologies emerge for synthesizing, sequencing or working with DNA.

“What’s great about this system is that if we wanted to replace one of the parts with something new or better or faster, we can just plug that in,” Nguyen said. “It gives us a lot of flexibility for the future.”

Top image: Microsoft and University of Washington researchers have successfully encoded and retrieved the word “hello” using this new system that fully automates DNA storage. It’s a key step in moving the technology out of the lab and into commercial datacenters.


Jennifer Langston writes about Microsoft research and innovation. Follow her on Twitter.

post

Project Triton and the physics of sound with Microsoft Research’s Dr. Nikunj Raghuvanshi

Episode 68, March 20, 2019

If you’ve ever played video games, you know that for the most part, they look a lot better than they sound. That’s largely due to the fact that audible sound waves are much longer – and a lot more crafty – than visual light waves, and therefore, much more difficult to replicate in simulated environments. But Dr. Nikunj Raghuvanshi, a Senior Researcher in the Interactive Media Group at Microsoft Research, is working to change that by bringing the quality of game audio up to speed with the quality of game video. He wants you to hear how sound really travels – in rooms, around corners, behind walls, out doors – and he’s using computational physics to do it.

Today, Dr. Raghuvanshi talks about the unique challenges of simulating realistic sound on a budget (both money and CPU), explains how classic ideas in concert hall acoustics need a fresh take for complex games like Gears of War, reveals the computational secret sauce you need to deliver the right sound at the right time, and tells us about Project Triton, an acoustic system that models how real sound waves behave in 3-D game environments to make us believe with our ears as well as our eyes.


Final Transcript

Nikunj Raghuvanshi: In a game scene, you will have multiple rooms, you’ll have caves, you’ll have courtyards, you’ll have all sorts of complex geometry and then people love to blow off roofs and poke holes into geometry all over the place. And within that, now sound is streaming all around the space and it’s making its way around geometry. And the question becomes how do you compute even the direct sound? Even the initial sound’s loudness and direction, which are important? How do you find those? Quickly? Because you are on the clock and you have like 60, 100 sources moving around, and you have to compute all of that very quickly.

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Host: If you’ve ever played video games, you know that for the most part, they look a lot better than they sound. That’s largely due to the fact that audible sound waves are much longer – and a lot more crafty – than visual light waves, and therefore, much more difficult to replicate in simulated environments. But Dr. Nikunj Raghuvanshi, a Senior Researcher in the Interactive Media Group at Microsoft Research, is working to change that by bringing the quality of game audio up to speed with the quality of game video. He wants you to hear how sound really travels – in rooms, around corners, behind walls, out doors – and he’s using computational physics to do it.

Today, Dr. Raghuvanshi talks about the unique challenges of simulating realistic sound on a budget (both money and CPU), explains how classic ideas in concert hall acoustics need a fresh take for complex games like Gears of War, reveals the computational secret sauce you need to deliver the right sound at the right time, and tells us about Project Triton, an acoustic system that models how real sound waves behave in 3-D game environments to make us believe with our ears as well as our eyes. That and much more on this episode of the Microsoft Research Podcast.

Host: Nikunj Raghuvanshi, welcome to the podcast.

Nikunj Raghuvanshi: I’m glad to be here!

Host: You are a senior researcher in MSR’s Interactive Media Group, and you situate your research at the intersection of computational acoustics and graphics. Specifically, you call it “fast computational physics for interactive audio/visual applications.”

Nikunj Raghuvanshi: Yep, that’s a mouthful, right?

Host: It is a mouthful. So, unpack that! How would you describe what you do and why you do it? What gets you up in the morning?

Nikunj Raghuvanshi: Yeah, so my passion is physics. I really like the mixture of computers and physics. So, the way I got into this was, many, many years ago, I picked up this book on C++ and it was describing graphics and stuff. And I didn’t understand half of it, and there was a color plate in there. It took me two days to realize that those are not photographs, they were generated by a machine, and I was like, somebody took a photo of a world that doesn’t exist. So, that is what excites me. I was like, this is amazing. This is as close to magic as you can get. And then the idea was I used to build these little simulations and I was like the exciting thing is you just code up these laws of physics into a machine and you see all this behavior emerge out of it. And you didn’t tell the world to do this or that. It’s just basic Newtonian physics. So, that is computational physics. And when you try to do this for games, the challenge is you have to be super-fast. You have 1/60th of a second to render the next frame to produce the next buffer of audio. Right? So, that’s the fast portion. How do you take all these laws and compute the results fast enough that it can happen at 1/60th of a second, repeatedly? So, that’s where the computer science enters the physics part of it. So, that’s the sort of mixture of things where I like to work in.

Host: You’ve said that light and sound, or video and audio, work together to make gaming, augmented reality, virtual reality, believable. Why are the visual components so much more advanced than the audio? Is it because the audio is the poor relation in this equation, or is it that much harder to do?

Nikunj Raghuvanshi: It is kind of both. Humans are visual dominant creatures, right? Because visuals are what is on our conscious mind and when you describe the world, our language is so visual, right? Even for sound, sometimes we use visual metaphors to describe things. So, that is part of it. And part of it is also that for sound, the physics is in many ways tougher because you have much longer wavelengths and you need to model wave diffraction, wave scattering and all these things to produce a believable simulation. And so, that is the physical aspect of it. And also, there’s a perceptual aspect. Our brain has evolved in a world where both audio/visual cues exist, and our brain is very clever. It goes for the physical aspects of both that give us separate information, unique information. So, visuals give you line-of-sight, high resolution, right? But audio is lower resolution directionally, but it goes around corners. It goes around rooms. That’s why if you put on your headphones and just listen to music at the loud volume, you are a danger to everybody on the street because you have no awareness.

Host: Right.

Nikunj Raghuvanshi: So, audio is the awareness part of it.

Host: That is fascinating because you’re right. What you can see is what is in front of you, but you could hear things that aren’t in front of you.

Nikunj Raghuvanshi: Yeah.

Host: You can’t see behind you, but you can hear behind you.

Nikunj Raghuvanshi: Absolutely, you can hear behind yourself and you can hear around stuff, around corners. You can hear stuff you don’t see, and that’s important for anticipating stuff.

Host: Right.

Nikunj Raghuvanshi: People coming towards you and things like that.

Host: So, there’s all kinds of people here that are working on 3D sound and head-related transfer functions and all that.

Nikunj Raghuvanshi: Yeah, Ivan’s group.

Host: Yeah! How is your work interacting with that?

Nikunj Raghuvanshi: So, that work is about, if I tell you the spatial sound field around your head, how does it translate into a personal experience in your two ears? So, the HRTF modeling is about that aspect. My work with John Snyder is about, how does the sound propagate in the world, right?

Host: Interesting.

Nikunj Raghuvanshi: So, if there is a sound down a hallway, what happens during the time it gets from there up to your head? That’s our work.

Host: I want you to give us a snapshot of the current state-of-the-art in computational acoustics and there’s apparently two main approaches in the field. What are they, and what’s the case for each and where do you land in this spectrum?

Nikunj Raghuvanshi: So, there’s a lot of work in room acoustics where people are thinking about, okay, what makes a concert hall sound great? Can you simulate a concert hall before you build it, so you know how it’s going to sound? And, based on the constraints in those areas, people have used a lot of ray tracing approaches which borrow from a lot of literature in graphics. And for graphics, ray tracing is the main technique, and it works really well, because the idea is you’re using a short wavelength approximation. So, light wavelengths are submicron and if they hit something, they get blocked. But the analogy I like to use is sound is very different, the wavelengths are much bigger. So, you can hold your thumb out in front of you and blot out the sun, but you are going to have a hard time blocking out the sound of thunder with a thumb held out in front of your ear because the waves will just wrap around. And, that’s what motivates our approach which is to actually go back to the physical laws and say, instead of doing the short wavelength approximation for sound, we revisit and say, maybe sound needs the more fundamental wave equation to be solved, to actually model these diffraction effects for us. The usual thinking is that, you know, in games, you are thinking about we want a certain set of perceptual cues. We want walls to occlude sound, we want a small room to reverberate less. We want a large hall to reverberate more. And the thought is, why are we solving this expensive partial differential equation again? Can’t we just find some shortcut to jump straight to the answer instead of going through this long-winded route of physics? And our answer has been that you really have to do all the hard work because there’s a ton of information that’s folded in and what seems easy to us as humans isn’t quite so easy for a computer and there’s no neat trick to get you straight to the perceptual answer you care about.

(music plays)

Host: Much of the work in audio and acoustic research is focused on indoor sound where the sound source is within the line of sight and the audience and the listener can see what they were listening to…

Nikunj Raghuvanshi: Um-hum.

Host: …and you mentioned that the concert hall has a rich literature in this field. So, what’s the gap in the literature when we move from the concert hall to the computer, specifically in virtual environments?

Nikunj Raghuvanshi: Yeah, so games and virtual reality, the key demand they have is the scene is not one room, and with time it has become much more difficult. So, a concert hall is terrible if you can’t see the people who are playing the sound, right? So, it allows for a certain set of assumptions that work extremely nicely. The direct sound, which is the initial sound, which is perceptually very critical, just goes in a straight line from source to listener. You know the distance so you can just use a simple formula and you know exactly how loud the initial sound is at the person. But in a game scene, you will have multiple rooms, you’ll have caves, you’ll have courtyards, you’ll have all sorts of complex geometry and then people love to blow off roofs and poke holes into geometry all over the place. And within that, now sound is streaming all around the space and it’s making its way around geometry. And the question becomes, how do you compute even the direct sound? Even the initial sound’s loudness and direction, which are important? How do you find those? Quickly? Because you are on the clock and you have like 60, 100 sources moving around, and you have to compute all of that very quickly. So, that’s the challenge.

Host: All right. So, let’s talk about how you’re addressing it. A recent paper that you’ve published made some waves, sound waves probably. No pun intended… It’s called Parametric Directional Coding for Pre-computed Sound Propagation. Another mouthful. But it’s a great paper and the technology is so cool. Talk about this… research this that you’re doing.

Nikunj Raghuvanshi: Yeah. So, our main idea is, actually, to look at the literature in lighting again and see the kind of path they’d followed to kind of tackle this computational challenge of how you do these extensive simulations and still hit that stringent CPU budget in real time. And one of the key ideas is you precompute. You cheat. You just look at the scene and just compute everything you need to compute beforehand, right? Instead of trying to do it on the fly during the game. So, it does introduce the limitation that the scene has to be static. But then you can do these very nice physical computations and you can ensure that the whole thing is robust, it is accurate, it doesn’t suffer from all the sort of corner cases that approximations tend to suffer from, and you have your result. You basically have a giant look-up table. If somebody tells you that the source is over there and the listener is over here, tell me what the loudness of the sound would be. We just say okay, we have this giant table, we’ll just go look it up for you. And that is the main way we bring the CPU usage under control. But it generates a knock-on challenge that now we have this huge table, there’s this huge amount of data that we’ve stored and it’s 6-dimensional. The source can move in 3-dimensions and the listener can move in 3-dimensions. So, we have the giant table which is terabytes or even more of data.

Host: Yeah.

Nikunj Raghuvanshi: And the game’s typical budget is like 100 megabytes. So, the key challenge we’re facing is, how do we fit everything in that? How do we take this data and extract out something salient that people listen to and use that? So, you start with full computation, you start as close to nature as possible and then we’re saying okay, now what would a person hear out of this? Right? Now, let’s do that activity of, instead of doing a shortcut, now let’s think about okay, a person hears the direction the sound comes from. If there is a doorway, the sound should come from the doorway. So, we pick out these perceptual parameters that are salient for human perception and then we store those. That’s the crucial way you kind of bring down this enormous data set and get to a memory budget that’s feasible.
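A purely illustrative sketch of the lookup idea Raghuvanshi describes (not Project Triton’s actual data structures; the grid resolution, parameter set and baked values below are made up for the example): acoustic parameters are baked per quantized source/listener cell offline, then queried in constant time at runtime.

    # Illustrative precomputed acoustic lookup (values below are invented).
    from dataclasses import dataclass

    @dataclass
    class AcousticParams:
        loudness_db: float        # initial-sound loudness at the listener
        arrival_direction: tuple  # unit vector toward the apparent source
        decay_time_s: float       # reverberation decay time

    CELL = 2.0  # meters per grid cell

    def key(source_xyz, listener_xyz):
        q = lambda p: tuple(int(round(c / CELL)) for c in p)
        return (q(source_xyz), q(listener_xyz))

    # Offline "bake": in a real system these come from a wave simulation.
    baked = {
        key((0, 0, 0), (6, 0, 0)): AcousticParams(-12.0, (-1.0, 0.0, 0.0), 0.8),
        key((0, 0, 0), (6, 4, 0)): AcousticParams(-20.0, (-0.83, -0.55, 0.0), 1.1),
    }

    def query(source_xyz, listener_xyz):
        return baked.get(key(source_xyz, listener_xyz))  # None if not baked

    # Runtime query snaps positions to the nearest baked cell
    print(query((0.3, 0.1, 0.0), (6.2, 0.4, 0.0)))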

Host: So, that’s the paper.

Nikunj Raghuvanshi: Um-hum.

Host: And how has it played out in practice, or in project, as it were?

Nikunj Raghuvanshi: So, a little bit of history on this is, we had a paper at SIGGRAPH 2010, me and John Snyder and some academic collaborators, and at that point, we were trying to think of just physical accuracy. So, we took the physical data and we were trying to stay as close to physical reality as possible and we were rendering that. And around 2012, we got to talking with the Gears of War studio, and we were going through what the budgets would be, how things would be. And we were like we need… this needs to… this is gigabytes, it needs to go to megabytes…

Host: Really?

Nikunj Raghuvanshi: …very quickly. And that’s when we were like, okay, let’s simplify. Like, what are the four most basic things that you really want from an acoustic system? Like, walls should occlude sound and things like that. So, we kind of rewound and came to it from this perceptual viewpoint that I was just describing. Let’s keep only what’s necessary. And that’s how we were able to ship this in 2016 in Gears of War 4, by just rewinding and doing this process.

Host: How is that playing into, you know… Project Triton is the big project that we’re talking about. How would you describe what that’s about and where it’s going? Is it everything you’ve just described or are there… other aspects to it?

Nikunj Raghuvanshi: Yeah. Project Triton is this idea that you should precompute the wave physics, instead of starting with approximations. Approximate later. That’s one idea of Project Triton. And the second is, if you want to make it feasible for real games and real virtual reality and augmented reality, switch to perceptual parameters. Extract that out of this physical simulation and then you have something feasible. And the path we are on now, which brings me back to the recent paper you mentioned…

Host: Right.

Nikunj Raghuvanshi: …is, in Gears of War, we shipped some set of parameters. We were like, these make a big difference. But one thing we lacked was if the sound is, say, in a different room and you are separated by a doorway, you would hear the right loudness of the sound, but its direction would be wrong. Its direction would be straight through the wall, going from source to listener.

Host: Interesting.

Nikunj Raghuvanshi: And that’s an important spatial cue. It helps you orient yourself when sounds funnel through doorways.

Host: Right.

Nikunj Raghuvanshi: Right? And it’s a cue that sound designers really look for and try to hand-tune to get good ambiances going. So, in the recent 2018 paper, that’s what we fixed. We call this portaling. It’s a made-up word for this effect of sounds going around doorways, but that’s what we’re modeling now.
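
As a rough illustration of what “portaling” means at render time: instead of spatializing the initial sound along the straight source-to-listener line, you point it along the direction the wave energy actually arrives from, for example through the doorway. This is a minimal sketch that assumes the baked data already supplies that arrival direction; the names are illustrative and not the Project Triton interface.

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def apparent_direction(source, listener, baked_arrival_direction=None):
    """Direction to spatialize the initial sound from, at the listener.

    Without portaling: the straight line through the wall (the wrong cue).
    With portaling: the baked arrival direction, e.g. toward the doorway.
    """
    straight = normalize(tuple(s - l for s, l in zip(source, listener)))
    return normalize(baked_arrival_direction) if baked_arrival_direction else straight

# Source in the next room, doorway off to the listener's left:
print(apparent_direction((5.0, 0.0, 3.0), (0.0, 0.0, 0.0)))                      # through the wall
print(apparent_direction((5.0, 0.0, 3.0), (0.0, 0.0, 0.0), (-1.0, 0.0, 0.2)))    # portaled
```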

Host: Is this new stuff? I mean, people have tackled these problems for a long time.

Nikunj Raghuvanshi: Yeah.

Host: Are you people the first ones to come up with this, the portaling and…?

Nikunj Raghuvanshi: I mean, the basic ideas have been around. People know that, perceptually, this is important, and there are approaches to try to tackle this, but I’d say, because we’re using wave physics, this problem becomes much easier because you just have the waves diffract around the edge. With ray tracing you face the difficult problem that you have to trace out the rays “intelligently” somehow to hit an edge, which is like hitting a bullseye, right?

Host: Right.

Nikunj Raghuvanshi: …so that the ray can wrap around the edge. So, it becomes really difficult. Most practical ray tracing systems don’t try to deal with this edge diffraction effect because of that. Although there are academic approaches to it, in practice it becomes difficult. But as I’ve worked on this over the years, I’ve kind of realized, these are the real advantages of this approach. The disadvantages are pretty clear: it’s slow, right? So, you have to precompute. But we’re realizing, over time, that going to physics has these advantages.

Host: Well, but the precompute part is innovative in terms of a thought process on how you would accomplish the speed-up…

Nikunj Raghuvanshi: There have been papers on precomputed acoustics academically before, but this realization that you mix precomputation with extracting these perceptual parameters? That is a recipe that makes a lot of practical sense. Because a third thing that I haven’t mentioned yet is, by going to the perceptual domain, now the sound designer can make sense of the numbers coming out of this whole system. Because it’s loudness. It’s reverberation time, how long the sound is reverberating. And these numbers are super-intuitive to sound designers; they already deal with them. So, now what you are telling them is, hey, you used to start with a blank world, which had nothing, right? Like the world before the act of creation, there’s nothing. It’s just empty space and you are trying to make things reverberate this way or that. Now you don’t need to do that. Now physics will execute first, on the actual scene with the actual materials, and then you can say, I don’t like what physics did here or there, let me tweak it, let me modify what the real result is and make it meet the artistic goals I have for my game.
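
One way to picture the “physics first, then tweak” workflow: the bake hands the designer intuitive numbers such as loudness and reverberation time for each query, and the designer layers offsets or scales on top. The sketch below is hypothetical and only illustrates that layering; it is not the actual Project Triton or Project Acoustics API.

```python
from dataclasses import dataclass, replace

@dataclass
class AcousticResult:
    loudness_db: float      # initial-sound loudness from the bake
    reverb_time_s: float    # how long the sound keeps reverberating

@dataclass
class DesignerTweaks:
    loudness_offset_db: float = 0.0
    reverb_scale: float = 1.0

def apply_tweaks(physics: AcousticResult, tweaks: DesignerTweaks) -> AcousticResult:
    """Physics executes first; the artist then nudges the result to taste."""
    return replace(
        physics,
        loudness_db=physics.loudness_db + tweaks.loudness_offset_db,
        reverb_time_s=physics.reverb_time_s * tweaks.reverb_scale,
    )

baked = AcousticResult(loudness_db=-18.0, reverb_time_s=1.4)
print(apply_tweaks(baked, DesignerTweaks(loudness_offset_db=3.0, reverb_scale=0.8)))
```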

(music plays)

Host: We’ve talked about indoor audio modeling, but let’s talk about the outdoors for a moment and the computational challenges of making natural outdoor sounds sound convincing.

Nikunj Raghuvanshi: Yeah.

Host: How have people hacked it in the past and how does your work in ambient sound propagation move us forward here?

Nikunj Raghuvanshi: Yeah, we’ve hacked it in the past! Okay. This is something we realized on Gears of War, because the parameters we used were borrowed, again, from the concert hall literature and, because they’re parameters informed by concert halls, things sound like halls and rooms. Back in the days of Doom, this tech would have been great because it was all indoors and rooms, but in Gears of War, we have these open spaces and it doesn’t sound quite right. Outdoors sounds like a huge hall, and you know, how do we do wind ambiences and rain that’s outdoors? And so, we came up with a solution for them at that time which we called “outdoorness.” It’s, again, an invented word.

Host: Outdoorness.

Nikunj Raghuvanshi: Outdoorness.

Host: I’m going to use that. I like it.

Nikunj Raghuvanshi: Because the idea it’s trying to convey is, it’s not a binary indoor/outdoor. When you are crossing a doorway or a threshold, you expect a smooth transition. You expect that, I’m not hearing rain inside, I’m feeling nice and dry and comfortable and now I’m walking into the rain…

Host: Yeah.

Nikunj Raghuvanshi: …and you want a smooth transition on it. So, we built some custom tech to do that outdoor transition. But it got us thinking about, what’s the right way to do this? How do you produce the right sort of spatial impression of, there’s rain outside, it’s coming through a doorway, the doorway is to my left, and as you walk, it spreads all around you. You are standing in the middle of the rain now and it’s all around you. So, we wanted to create that experience. So, the ambient sound propagation work was an intern project, and we finished it up with our collaborators at Cornell. And that was about, how do you model extended sound sources? So, again, going back to concert halls, usually people have dealt with point-like sources which might have a directivity pattern. But rain is like a million little drops. If you try to model each and every drop, that’s not going to get you anywhere. So, that’s what the paper is about: how do you treat it as one aggregate that somebody gave us? We produce an aggregate energy distribution of that thing, along with its directional characteristics, and just encode that.
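
The “outdoorness” idea is easy to picture as a per-listener scalar between 0 (fully indoors) and 1 (fully outdoors) that cross-fades ambient beds such as rain as you walk through a doorway. A minimal sketch follows, assuming that scalar is already available from the baked data; the equal-power crossfade is one common choice and the details here are illustrative, not the shipped implementation.

```python
import math

def ambient_gains(outdoorness):
    """Equal-power crossfade between indoor and outdoor ambient beds.

    outdoorness: 0.0 deep inside a building, 1.0 standing out in the rain,
    varying smoothly as you cross a doorway or stand under a blown-off roof.
    """
    t = min(max(outdoorness, 0.0), 1.0)
    indoor_gain = math.cos(t * math.pi / 2.0)
    outdoor_gain = math.sin(t * math.pi / 2.0)
    return indoor_gain, outdoor_gain

for x in (0.0, 0.25, 0.5, 1.0):   # walking from indoors out into the rain
    print(x, ambient_gains(x))
```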

Host: And just encode it.

Nikunj Raghuvanshi: And just encode it.

Host: How is it working?

Nikunj Raghuvanshi: It works nice. It sounds good. To my ears it sounds great.

Host: Well you know, and you’re the picky one, I would imagine.

Nikunj Raghuvanshi: Yeah. I’m the picky one and also when you are doing iterations for a paper, you also completely lose objectivity at some point. So, you’re always looking for others to get some feedback.

Host: Here, listen to this.

Nikunj Raghuvanshi: Well, reviewers give their feedback, so, yeah.

Host: Sure. Okay. Well, kind of riffing on that, there’s another project going on that I’d love for you to talk about as much as you can, called Project Acoustics, and kind of the future of where we’re going with this. Talk about that.

Nikunj Raghuvanshi: That’s really exciting. So, up to now, Project Triton was an internal tech which we managed to propagate from research into actual Microsoft product, internally.

Host: Um-hum.

Nikunj Raghuvanshi: Project Acoustics is being led by Noel Cross’s team in Azure Cognition. And what they’re doing is turning it into a product that’s externally usable. So, trying to democratize this technology so it can be used by any game audio team anywhere backed by Azure compute to do the precomputation.

Host: Which is key, the Azure compute.

Nikunj Raghuvanshi: Yeah, because you know, it took us a long time with Gears of War to figure out, okay, where is all this precompute going to happen?

Host: Right.

Nikunj Raghuvanshi: We had to figure out the whole cluster story for ourselves, how to get the machines, how to procure them, and there’s a big headache of arranging compute for yourself. And so that’s, logistically, a key problem that people face when they try to think of precomputed acoustics. On the run-time side of Project Acoustics, we are going to have plug-ins for all the standard game audio engines and everything. So, that makes things simpler on that side. But a key blocker, in my view, was always this question of, where are you going to precompute? So, now the answer is simple. You get your Azure account and you just send your stuff up there and it just computes.

Host: Send it to the cloud and the cloud will rain it back down on you.

Nikunj Raghuvanshi: Yes. It will send down data.

Host: Who is your audience for Project Acoustics?

Nikunj Raghuvanshi: Project Acoustics, the audience is the whole game audio industry. And our real hope is that we’ll see some uptake on it when we announce it at GDC in March. We want as many teams as possible, small, big, medium, everybody, to start using this, because we feel there’s a positive feedback loop that can be set up: these new tools are available, designers realize they have shipped in Triple A games, so they do work, and they give us feedback. If they use these tools, we hope they can produce new audio experiences that are distinctly different, so that they can then say to their tech director, or somebody, for the next game, we need more CPU budget, because we’ve shown you value. So, a big exercise was how to fit this within current budgets, so people can produce these examples of novel experiences and argue for more. So, to increase the budget for audio and kind of bring it on par with graphics over time, as you alluded to earlier.

Host: You know, if we get nothing across in this podcast, it’s like, people, pay attention to good audio. Give it its props. Because it needs it. Let’s talk briefly about some of the other applications for computational acoustics. Where else might it be awesome to have a layer of realism with audio computing?

Nikunj Raghuvanshi: One of the applications that I find very exciting is audio rendering for people who are blind. I had the opportunity to actually show the demo of our latest system to Daniel Kish, who, if you don’t know, is the human echo-locator. He uses clicks from his mouth to locate geometry around him and he’s always oriented. He’s an amazing person. So that was a collaboration, actually, we had with a team in the Garage. They released a game called Ear Hockey and it was a nice collaboration; there was a good exchange of ideas over there. That’s nice because I feel that’s a whole different application where it can have a positive social impact. The other one that’s very interesting to me is that we’ve lived in 2-D desktop screens for a while and now computing is moving into the physical world. That’s the exciting thing about mixed reality, moving compute out into this world. And then the acoustics of the real world being folded into the sounds of virtual objects becomes extremely important. If something virtual is right behind the wall from you, you don’t want to hear it at full loudness. That would completely break the realism of something being situated in the real world. So, from that viewpoint, good light transport and good sound propagation are both required things for the future compute platform in the physical world. So that’s a very exciting future direction to me.

(music plays)

Host: It’s about this time in the podcast I ask all my guests the infamous “what keeps you up at night?” question. And when you and I talked before, we went down kind of two tracks here, and I felt like we could do a whole podcast on it, but sadly we can’t… But let’s talk about what keeps you up at night. Ironically to tee it up here, it deals with both getting people to use your technology…

Nikunj Raghuvanshi: Um-hum.

Host: And keeping people from using your technology.

Nikunj Raghuvanshi: No! I want everybody to use the technology. But I’d say, like, five years ago, what used to keep me up at night was, how are we going to ship this thing in Gears of War? Now what’s keeping me up at night is how do we make Project Acoustics succeed, how do we, you know, expand the adoption of it and, in a small way, try to move the game audio industry forward a bit and help artists do the artistic expression they need to do in games? So, that’s what I’m thinking right now, how can we move things forward in that direction? I frankly look at video games as an art form. And I’ve gamed a lot in my time. To be honest, not all of it was art; I was enjoying myself a lot and I wasted some time playing games. But we all have our ways to unwind and waste time. But good games can be amazing. They can be much better than a Hollywood movie in terms of what you leave them with. And I just want to contribute in my small way to that. Giving artists the tools to maybe make the next great story, you know.

Host: All right. So, let’s do talk a little bit, though, about this idea of you make a really good game…

Nikunj Raghuvanshi: Um-hum.

Host: Suddenly, you’ve got a lot of people spending a lot of time. I won’t say wasting. But we have to address the nature of gaming, and the fact that there are you know… you’re upstream of it. You are an artist, you are a technologist, you are a scientist…

Nikunj Raghuvanshi: Um-hum.

Host: And it’s like I just want to make this cool stuff.

Nikunj Raghuvanshi: Yeah.

Host: Downstream, people want people to use it a lot. So, how do you think about that and the responsibilities of a researcher in this arena?

Nikunj Raghuvanshi: Yeah. You know, this reminds me of Kurt Vonnegut’s book, Cat’s Cradle? There’s a scientist who makes ice-nine and it freezes the whole planet or something. So, you see things about video games in the news and stuff. But I frankly feel that the kind of games I’ve participated in making, these games are very social experiences. People meet up in these games a lot. Like Sea of Thieves is all about, you get a bunch of friends together, you’re sitting on the couch together, and you’re just going crazy on these pirate ships and trying to just have fun. So, they are not the sort of games where a person is being separated from society by the act of gaming and is just immersed in the screen and not participating in the world. They are kind of the opposite. So, games have all these aspects. And so, I personally feel pretty good about the games I’ve contributed to. I can at least say that.

Host: So, I like to hear personal stories of the researchers that come on the podcast. So, tell us a little bit about yourself. When did you know you wanted to do science for a living and how did you go about making that happen?

Nikunj Raghuvanshi: Science for a living? I was the guy in 6th grade who’d get up and say I want to be a scientist. So, that was then, but what got me really hooked was graphics, initially. Like I told you, I found the book which had these color plates and I was like, wow, that’s awesome! So, I was at UNC Chapel Hill, in the graphics group, and I studied graphics for my graduate studies. And then, in my second or third year, my advisor, Ming Lin, who does a lot of research in physical simulations. How do you make water look nice in physical simulations? Lots of it is CGI. How do you model that? How do you model cloth? How do you model hair? So, there’s all this physics for that. And so, I took a course with her and I was like, you know what? I want to do audio, because you get a different sense, right? It’s simulation, not for visuals, but you get to hear stuff. I’m like, okay, this is cool. This is different. So, I did a project with her and I published a paper on sound synthesis. So, like, how rigid bodies, objects rolling and bouncing around and sliding, make sound, just from physical equations. And I found a cool technique and I was like, okay, let me do acoustics with this. It’s going to be fun. And I’m going to publish another paper in a year. And here I am, still trying to crack that problem of how to do acoustics in spaces!

Host: Yeah, but what a place to be. And speaking of that, you have a really interesting story about how you ended up at Microsoft Research and brought your entire PhD code base with you.

Nikunj Raghuvanshi: Yeah. It was an interesting time. So, when I was graduating, MSR was my number one choice because I was always thinking of this technology as, it would be great if games used this one day. This is the sort of thing that would have a good application in games. And then, around that time, I got hired into MSR, into what was a multicore incubation back then; my group was looking at how these multicore systems enable all sorts of cool new things. And one of the things my hiring manager was looking at was how we could do physically based sound synthesis and propagation. That’s what my PhD was, so they licensed the whole code base and I built on that.

Host: You don’t see that very often.

Nikunj Raghuvanshi: Yeah, it was nice.

Host: That’s awesome. Well, Nikunj, as we close, I always like to ask guests to give some words of wisdom or advice or encouragement, however it looks to you. What would you say to the next generation of researchers who might want to make sound sound better?

Nikunj Raghuvanshi: Yeah, it’s an exciting area. It’s super-exciting right now. Even just to start with the more technical stuff, there are so many problems to solve with acoustic propagation. I’d say we’ve taken just the first step of feasibility, maybe a second one with Project Acoustics, but we’re right at the beginning of this. And we’re thinking there are so many missing things. Outdoors is one thing that we’ve kind of fixed up a bit, but we’re going towards, what sorts of effects can you model in the future? Directional sources is one we’re looking at, but there are so many problems. I kind of think of it as the 1980s of graphics, when people first figured out that you can make light propagation work. What are the things that you need to do to make it ever closer to reality? And we’re still at it. So, I think we’re at that phase with acoustics. We’ve just figured out this is one way that you can actually ship in practical applications, and we know there are deficiencies in its realism in many, many places. So, I think of it as a very rich area that students can jump in and start contributing to.

Host: Nowhere to go but up.

Nikunj Raghuvanshi: Yes. Absolutely!

Host: Nikunj Raghuvanshi, thank you for coming in and talking with us today.

Nikunj Raghuvanshi: Thanks for having me.

(music plays)

To learn more about Dr. Nikunj Raghuvanshi and the science of sound simulation, visit Microsoft.com/research