
Meet the Microsoft Pluton processor – the security chip designed for the future of Windows PCs

The role of the Windows PC and trust in technology are more important than ever as our devices keep us connected and productive across work and life. Windows 10 is the most secure version of Windows ever, built with end-to-end security for protection from the edge to the cloud all the way down to the hardware. Advancements like Windows Hello biometric facial recognition, built-in Microsoft Defender Antivirus, and firmware protections and advanced system capabilities like System Guard, Application Control for Windows and more have helped Microsoft keep pace with the evolving threat landscape.

While cloud-delivered protections and AI advancements to the Windows OS have made it increasingly more difficult and expensive for attackers, they are rapidly evolving, moving to new targets: the seams between hardware and software that can’t currently be reached or monitored for breaches. We have already taken steps to combat these sophisticated cybercriminals and nation state actors with our partners through innovations like secured-core PCs that offer advanced identity, OS, and hardware protection.

Today, Microsoft, alongside our biggest silicon partners, is announcing a new vision for Windows security to help ensure our customers are protected today and in the future. In collaboration with leading silicon partners AMD, Intel, and Qualcomm Technologies, Inc., we are announcing the Microsoft Pluton security processor. This chip-to-cloud security technology, pioneered in Xbox and Azure Sphere, will bring even more security advancements to future Windows PCs and signals the beginning of a journey with ecosystem and OEM partners.

Our vision for the future of Windows PCs is security at the very core, built into the CPU, where hardware and software are tightly integrated in a unified approach designed to eliminate entire vectors of attack. This revolutionary security processor design will make it significantly more difficult for attackers to hide beneath the operating system, and improve our ability to guard against physical attacks, prevent the theft of credentials and encryption keys, and provide the ability to recover from software bugs.

Pluton design redefines Windows security at the CPU

Today, the heart of operating system security on most PCs lives in a chip separate from the CPU, called the Trusted Platform Module (TPM). The TPM is a hardware component which is used to help securely store keys and measurements that verify the integrity of the system. TPMs have been supported in Windows for more than 10 years and power many critical technologies such as Windows Hello and BitLocker. Given the effectiveness of the TPM at performing critical security tasks, attackers have begun to innovate ways to attack it, particularly in situations where an attacker can steal or temporarily gain physical access to a PC. These sophisticated attack techniques target the communication channel between the CPU and TPM, which is typically a bus interface. This bus interface provides the ability to share information between the main CPU and security processor, but it also provides an opportunity for attackers to steal or modify information in-transit using a physical attack.

The Pluton design removes the potential for that communication channel to be attacked by building security directly into the CPU. Windows PCs using the Pluton architecture will first emulate a TPM that works with the existing TPM specifications and APIs, which will allow customers to immediately benefit from enhanced security for Windows features that rely on TPMs like BitLocker and System Guard. Windows devices with Pluton will use the Pluton security processor to protect credentials, user identities, encryption keys, and personal data. None of this information can be removed from Pluton even if an attacker has installed malware or has complete physical possession of the PC.

This is accomplished by storing sensitive data like encryption keys securely within the Pluton processor, which is isolated from the rest of the system, helping to ensure that emerging attack techniques, like speculative execution, cannot access key material. Pluton also provides the unique Secure Hardware Cryptography Key (SHACK) technology that helps ensure keys are never exposed outside of the protected hardware, even to the Pluton firmware itself, providing an unprecedented level of security for Windows customers.

The Pluton security processor complements work Microsoft has done with the community, including Project Cerberus, by providing a secure identity for the CPU that can be attested by Cerberus, thus enhancing the security of the overall platform.

Graphic showing the Microsoft Pluton security processor

One of the other major security problems solved by Pluton is keeping the system firmware up to date across the entire PC ecosystem. Today, customers receive updates to their security firmware from a variety of different sources that can be difficult to manage, resulting in widespread patching issues. Pluton provides a flexible, updateable platform for running firmware that implements end-to-end security functionality authored, maintained, and updated by Microsoft. Pluton for Windows computers will be integrated with the Windows Update process in the same way that the Azure Sphere Security Service connects to IoT devices.

The fusion of Microsoft’s OS security improvements, innovations like secured-core PCs and Azure Sphere, and hardware innovation from our silicon partners provides the capability for Microsoft to protect against sophisticated attacks across Windows PCs, the Azure cloud, and Azure intelligent edge devices.

Innovating with our partners to enhance chip-to-cloud security

The PC owes its success largely to an immensely vibrant ecosystem with OS, silicon, and OEM partners all working together to solve tough problems through collaborative innovation. This was demonstrated over 10 years ago with the successful introduction of the TPM, the first broadly available hardware root of trust. Since that milestone, Microsoft and partners have continued to collaborate on next generation security technologies that take full advantage of the latest OS and silicon innovations to solve the most challenging problems in security. This better together approach is how we intend to make the PC ecosystem the most secure available.

The Microsoft Pluton design technology incorporates all of the learnings from delivering hardware root-of-trust-enabled devices to hundreds of millions of PCs. The Pluton design was introduced as part of the integrated hardware and OS security capabilities in the Xbox One console released in 2013 by Microsoft in partnership with AMD and also within Azure Sphere. The introduction of Microsoft’s IP technology directly into the CPU silicon helped guard against physical attacks, prevent the discovery of keys, and provide the ability to recover from software bugs.

Through the effectiveness of the initial Pluton design, we’ve learned a lot about how to use hardware to mitigate a range of physical attacks. Now, we are taking what we learned to deliver on a chip-to-cloud security vision and bring even more security innovation to the future of Windows PCs (more details in this talk from Microsoft BlueHat). Azure Sphere leveraged a similar security approach to become the first IoT product to meet the “Seven properties of highly secure devices.”

The shared Pluton root-of-trust technology will maximize the health and security of the entire Windows PC ecosystem by leveraging the security expertise and technologies from the companies involved. The Pluton security processor will provide next generation hardware security protection to Windows PCs through future chips from AMD, Intel, and Qualcomm Technologies.

“At AMD, security is our top priority and we are proud to have been at the forefront of hardware security platform design to support features that help safeguard users from the most sophisticated attacks. As a part of that vigilance, AMD and Microsoft have been closely partnering to develop and continuously improve processor-based security solutions, beginning with the Xbox One console and now in the PC. We design and build our products with security in mind and bringing Microsoft’s Pluton technology to the chip level will enhance the already strong security capabilities of our processors.” – Jason Thomas, head of product security, AMD

“Intel continues to partner with Microsoft to advance the security of Windows PC platforms. The introduction of Microsoft Pluton into future Intel CPUs will further enable integration between Intel hardware and the Windows operating system.” – Mike Nordquist, Sr. Director, Commercial Client Security, Intel

“Qualcomm Technologies is pleased to continue its work with Microsoft to help make a slew of devices and use cases more secure. We believe an on-die, hardware-based Root-of-Trust like the Microsoft Pluton is an important component in securing multiple use cases and the devices enabling them.” – Asaf Shen, senior director of product management at Qualcomm Technologies, Inc.

We believe that processors with built-in security like Pluton are the future of computing hardware. With Pluton, our vision is to provide a more secure foundation for the intelligent edge and the intelligent cloud by extending this level of built-in trust to devices and things everywhere.

Our work with the community helps Microsoft continuously innovate and enhance security at every layer. We’re excited to make this revolutionary security design a reality with the biggest names in the silicon industry as we continuously work to enhance security for all.


Securing remote employees with an internet-first model and Zero Trust

Like many this year, our Microsoft workforce had to quickly transition to a work-from-home model in response to COVID-19. While nobody could have predicted the world’s current state, it has provided a very real-world test of the investments we have made implementing a Zero Trust security model internally. At the peak, about 97 percent of our workforce was successfully working from home, either on a Microsoft-issued or personal device.

Much of the credit for this success goes to the Zero Trust journey we started over three years ago. Zero Trust has been critical in making this transition to a work-from-home model relatively friction-free. One of the major components to our Zero Trust implementation is ensuring our employees have access to applications and resources regardless of their location. We enable employees to be productive from anywhere, whether they’re at home, a coffee shop, or at the office.  

To make this happen, we needed to make sure most of our resources were accessible over any internet connection. The preferred method to achieve this is through modernizing applications and services using the cloud and modern authentication systems. For legacy applications or services unable to migrate to the cloud, we use an application proxy service which serves as a broker to connect to the on-premises environment while still enforcing strong authentication principles.

Strong authentication and adaptive access policies are critical components in the validation process. A big part of this validation process included enrolling devices in our device management system to ensure only known and healthy devices are directly accessing our resources. For users on devices that are not enrolled in our management system, we have developed virtualization options that allow them to access resources on an unmanaged device. One of the early impacts of COVID-19 was device shortages and the inability to procure new hardware. Our virtualization implementation also helped provide secure access for new employees while they waited for their device’s arrival. 

The output of these efforts, combined with a VPN configuration that enables split tunneling for access to the few remaining on-premises applications, has made it possible for Microsoft employees to work anywhere in a time when it is most critical. 

Implementing an internet-first model for your applications

In this blog, I will share some recommendations on implementing an internet-first approach plus a few of the things we learned in our efforts here at Microsoft. Because every company has its own unique culture, environments, infrastructure, and threshold for change, there is no one-size-fits-all approach. Hopefully, you will find some of this information useful, even if only to validate you are already on the right path. 

Before I jump in, I just want to mention that this blog will assume you’ve completed some of the foundational elements needed for a Zero Trust security model. These include modernizing your identity system, verifying sign-ins with multi-factor authentication (MFA), registering devices, and ensuring compliance with IT security policies, etc. Without these protections in place, moving to an internet-first posture is not possible. 

As previously mentioned, your apps will need to be modernized by migrating them to the cloud and implementing modern authentication services. This is the optimal path to internet accessibility. For apps that can’t be modernized or moved to the cloud (think legacy on-premises apps), you can leverage an app proxy to allow the connection over the internet and still maintain the strong authentication principles. 

Secure access via adaptive access policies

Once your apps are accessible via the public internet, you will want to control access based on conditions you select to enforce. At Microsoft, we use Conditional Access policies to enforce granular access control, such as requiring multi-factor authentication, based upon user context, device, location, and session risk information. We also enforce device management and health policies to ensure the employee comes from a known and healthy device once they have successfully achieved strong authentication.

Depending on your organization’s size, you might want to start slow by implementing multi-factor authentication and device enrollment first, then ramping up to biometric authentication and full device health enforcement. Check out our Zero Trust guidance for identities and devices that we follow internally for some additional recommendations.

 When we rolled out our device enrollment policy, we learned that using data to measure the policy’s impact allowed us to tailor our messaging and deployment schedule. We enabled “logging mode”, which let us enable the policies and collect data on who would be impacted when we moved to enforcement. Using this data, we first targeted users who were already using compliant devices. For users that we knew were going to be impacted, we crafted targeted messaging alerting them of the upcoming changes and how they would be impacted. This slower, more measured deployment approach allowed us to monitor and respond to issues more quickly. Using this data to shape our rollout helped us minimize the impact of significant policy implementation. 

Start with a hero application

Picking your first application to move out to the public internet can be approached in a few different ways. Do you want to start with something small and non-critical? Or perhaps you want to “flip the switch” to cover everything at once? We decided to start with a hero application that would prove the approach works at scale. Office 365 was the obvious choice because it provided the broadest coverage; most employees use it daily, regardless of their role. We were confident that if we could implement Office 365 successfully, we could be successful with most of our portfolio.

Ultimately, it will boil down to your environment, threshold for support engagements, and company culture. Choose the path that works best for you and push forward. All paths will provide valuable data and experience that will help later.

Prioritize your remaining apps and services

Prioritizing the apps and services you modernize next can be challenging, especially without granular visibility into what employees are accessing in your environment. When we began our journey, we had theories about what people were accessing but no data to back it up. To provide the visibility we lacked, we built a dashboard that reported actual traffic volumes to the applications and services still hosted on-premises. This gave us much-needed information to help prioritize apps and services based on impact, complexity, risk, and more.

We also used this dashboard to identify which application or service owners we needed to coordinate with to modernize their resources. To coordinate with these owners, we created work items in our task tracking system and assigned the owner a deadline to provide a plan to either modernize or implement a proxy front end solution. We also created a tracking dashboard for all these tasks and their status to make reporting easier.  

We then worked closely with owners to provide guidance and best practices to drive their success. We conduct weekly office hours where application and service owners can ask questions. The partnership between these application and service owners and the teams working on Zero Trust helps us all drive towards the same common goals—frictionless access for our employees. 

A quick note on what we learned through the dashboard—the on-premises applications and services people were still accessing were not what we were expecting. The dashboard surfaced several items we were unaware people were still using. Fortunately, the dashboard helped remove a layer of fog we were unaware even existed and has been invaluable in driving our prioritization efforts. 

As I mentioned at the beginning of this blog, every company is unique. As such, how you think about Zero Trust and your investments might be different than the company across the street. I hope some of the insight provided above was helpful, even if it is just to get you thinking about how you would approach solving some of these challenges inside your own organization.

To learn more about how Microsoft IT (Information Technology) works, check out IT Showcase. To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news on cybersecurity.


Cloud crime investigator describes what it takes to fight ransomware and botnets

Ransomware is a type of malware that holds computer systems or data hostage with demands for payment. And it has been used against a wide variety of targets, including governments, businesses and health care facilities. Ransomware distributors are also part of a wider web of digital menace that has threatened election security.

In October 2020, the Microsoft Digital Crimes Unit worked with a coalition of partners to disrupt Trickbot, one of the most infamous botnets and prolific distributors of ransomware. Botnets are networks of computers infected by malware and being used to commit cybercrimes. While disrupting a botnet is challenging work and success varies over time, Microsoft and its partners were able to disrupt 94% of Trickbot’s critical operational infrastructure in six days.

Jason Lyons is a malware and cloud crime investigator at the Microsoft DCU and part of a team that disrupted Trickbot. We caught up with Jason to find out more about this critical work. Below is an edited version of our conversation.

What is the Microsoft Digital Crimes Unit?

I don’t think there’s another organization in private industry with the same structure, components and skill sets. It sits within the Customer Security and Trust team at Microsoft, and it comprises many different jobs and skills, including lawyers and paralegals, cyber analysts, security researchers and investigators, like myself, as well as the engineers who help build the tools we need. We have about 65 people working around the globe – at Microsoft’s Redmond, Washington, headquarters and in Asia, Europe and South America.

How do people come to work in the team? What did you do beforehand?

I used to be a special agent in the U.S. Army, doing counterintelligence work. Our investigators come from many different backgrounds. One of my colleagues in DCU was a colonel in the Army, working in the communications sector. Another investigator who joined DCU recently was a computer scientist for the FBI. Another is an attorney. So there are lots of different backgrounds on the team.

[READ MORE: Protecting democracy, especially in a time of crisis]

What does your work at the DCU involve?

Within my team, which is one of about four within the DCU, we carry out about two or three major botnet disruptions – like the Trickbot operation – a year. Then we work with product teams within Microsoft, particularly Microsoft Defender and Office 365, to ensure we’re on top of current threats, as well as tackling any internal security issues. Our goal is to stop the spread of malware and protect our customers and users of the internet.

The DCU tackles the biggest threats in the ecosystem. We primarily focus on those that are having the biggest impact on our customers … or as important to a partner like FS-ISAC [the financial services cyber intelligence sharing body], which represents financial institutions all over the world. Trickbot was brought to our attention because of its antivirus (AV) tampering – once it infects a system, it has the ability to turn off the AV product.

When a case is referred to the DCU, what happens next?

They’re usually brought to us by someone inside our product group saying, “Hey, this is a significant problem for us.” We evaluate the issue, looking at the impact it could have not just on Microsoft but more widely, too. We ask whether there’s infrastructure to disrupt, where the bad guys are located and where they’re hosting their servers – and if we can get a U.S. court order to disrupt them. Or, will we need international partners and possibly international judicial orders, all of which is possible with our global team. Then we investigate how the botnet operates – is there a vulnerability we can exploit to disconnect the criminals from the victim machines and cause a significant disruption?

How are these weaknesses identified?

We build automated systems to dissect the information in the files that the botnet sends out. Then we’ll take it into our malware lab to really find out how it works. We want to know how it infects the operating system and what it does next. What security protocols does it turn off? And we look at how it communicates – probably the most important thing is what the communication between the command and control server and the victim looks like.


Web of Trust, Part 2: Tutorial

The previous article looked at how the Web of Trust works in concept, and how the Web of Trust is implemented at Fedora. In this article, you’ll learn how to do it yourself. The power of this system lies in everybody being able to validate the actions of others—if you know how to validate somebody’s work, you’re contributing to the strength of our shared security.

Choosing a project

Remmina is a remote desktop client written in GTK+. It aims to be useful for system administrators and travelers who need to work with lots of remote computers in front of either large monitors or tiny netbooks. In the current age, where many people must work remotely or at least manage remote servers, the security of a program like Remmina is critical. Even if you do not use it yourself, you can contribute to the Web of Trust by checking it for others.

The question is: how do you know that a given version of Remmina is good, and that the original developer—or distribution server—has not been compromised?

For this tutorial, you’ll use Flatpak and the Flathub repository. Flatpak is intentionally well-suited for making verifiable rebuilds, which is one of the tenets of the Web of Trust. It’s easier to work with since it doesn’t require users to download independent development packages. Flatpak also uses techniques to prevent in‑flight tampering, using hashes to validate its read‑only state. As far as the Web of Trust is concerned, Flatpak is the future.

This guide uses Remmina, but it generally applies to every application you use. It’s also not exclusive to Flatpak, and the general steps also apply to Fedora’s repositories. In fact, if you’re currently reading this article on Debian or Arch, you can still follow the instructions. If you want to follow along using traditional RPM repositories, make sure to check out this article.

Installing and checking

To install Remmina, use the Software Center or run the following from a terminal:

flatpak install flathub org.remmina.Remmina -y

After installation, you’ll find the files in:

/var/lib/flatpak/app/org.remmina.Remmina/current/active/files/ 

Open a terminal here and find the following directories using ls -la:

total 44
drwxr-xr-x. 2 root root 4096 Jan 1 1970 bin
drwxr-xr-x. 3 root root 4096 Jan 1 1970 etc
drwxr-xr-x. 8 root root 4096 Jan 1 1970 lib
drwxr-xr-x. 2 root root 4096 Jan 1 1970 libexec
-rw-r--r--. 2 root root 18644 Aug 25 14:37 manifest.json
drwxr-xr-x. 2 root root 4096 Jan 1 1970 sbin
drwxr-xr-x. 15 root root 4096 Jan 1 1970 share

Getting the hashes

In the bin directory you will find the main binaries of the application, and in lib you will find all the dependencies that Remmina uses. Now calculate the hashes of the binaries in ./bin/:

sha256sum ./bin/*

This will give you a list of numbers: checksums. Copy them to a temporary file, as this is the current version of Remmina that Flathub is distributing. These numbers have something special: only an exact copy of Remmina can give you the same numbers. Any change in the code—no matter how minor—will produce different numbers.
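If you save that list of checksums to a file, sha256sum can re-verify the same files for you later. The following self-contained sketch uses a throwaway file in /tmp as a stand-in for the real Remmina binary, so the paths here are illustrative:

```shell
# Stand-in demo: a throwaway file behaves exactly like ./bin/remmina
mkdir -p /tmp/wot-demo
printf 'demo binary\n' > /tmp/wot-demo/remmina
# Record the checksum now, as you did for the Flathub copy
(cd /tmp/wot-demo && sha256sum ./remmina) > /tmp/wot-demo.sums
# Check mode (-c) re-hashes the file and prints OK while it is unchanged
(cd /tmp/wot-demo && sha256sum -c /tmp/wot-demo.sums)
# Any modification, however small, flips the verdict to FAILED
printf 'demo binary?\n' > /tmp/wot-demo/remmina
(cd /tmp/wot-demo && sha256sum -c /tmp/wot-demo.sums) || echo 'change detected'
```

The same check mode works against the real files under /var/lib/flatpak once you have recorded their checksums.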

Like Fedora’s Koji and Bodhi build and update services, Flathub has all its build servers in plain view. In the case of Flathub, look at Buildbot to see who is responsible for the official binaries of a package. Here you will find all of the logs, including all the failed builds and their paper trail.

Illustration image, which shows the process-graph of Buildbot on Remmina.

Getting the source

The main Flathub project is hosted on GitHub, where the exact compile instructions (“manifest” in Flatpak terms) are visible for all to see. Open a new terminal in your Home folder. Clone the instructions, and possible submodules, using one command:

git clone --recurse-submodules https://github.com/flathub/org.remmina.Remmina

Developer tools

Start off by installing the Flatpak Builder:

sudo dnf install flatpak-builder

After that, you’ll need to get the right SDK to rebuild Remmina. In the manifest, you’ll find which SDK is currently used:

"runtime": "org.gnome.Platform",
"runtime-version": "3.38",
"sdk": "org.gnome.Sdk",
"command": "remmina",

This indicates that you need the GNOME SDK, which you can install with:

flatpak install org.gnome.Sdk//3.38

This provides the latest versions of the Free Desktop and GNOME SDK. There are also additional SDKs for other options, but those are beyond the scope of this tutorial.

Generating your own hashes

Now that everything is set up, compile your version of Remmina by running:

flatpak-builder build-dir org.remmina.Remmina.json --force-clean

After this, your terminal will print a lot of text, your fans will start spinning, and you’re compiling Remmina. If things do not go so smoothly, refer to the Flatpak Documentation; troubleshooting is beyond the scope of this tutorial.

Once complete, you should have the directory ./build-dir/files/, which should contain the same layout as above. Now the moment of truth: it’s time to generate the hashes for the built project:

sha256sum ./bin/*
Illustrative image showing the output of sha256sum. To discourage copy-pasting of old hashes, they are not provided in the text.

You should get exactly the same numbers. This proves that the version on Flathub is indeed the version that the Remmina developers and maintainers intended for you to run. This is great, because this shows that Flathub has not been compromised. The web of trust is strong, and you just made it a bit better.
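The comparison itself can be made mechanical by writing both hash lists to files and diffing them. A self-contained sketch, with two throwaway directories standing in for the Flathub files/bin directory and your own build-dir/files/bin (all paths here are illustrative):

```shell
# Two throwaway directories stand in for the Flathub install
# and your local rebuild (paths are illustrative)
mkdir -p /tmp/flathub-bin /tmp/rebuilt-bin
printf 'remmina build\n' > /tmp/flathub-bin/remmina
printf 'remmina build\n' > /tmp/rebuilt-bin/remmina
# Hash each set with identical relative paths, then compare the lists
(cd /tmp/flathub-bin && sha256sum ./*) > /tmp/flathub.sums
(cd /tmp/rebuilt-bin && sha256sum ./*) > /tmp/rebuilt.sums
diff /tmp/flathub.sums /tmp/rebuilt.sums && echo 'hashes match'
```

Running sha256sum from inside each directory keeps the relative paths identical, so diff compares only the checksums and prints nothing when everything matches.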

Going deeper

But what about the ./lib/ directory? And what version of Remmina did you actually compile? This is where the Web of Trust starts to branch. First, you can also double-check the hashes of the ./lib/ directory. Repeat the sha256sum command using a different directory.
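Because ./lib/ contains nested subdirectories, find can feed every file to sha256sum in one pass; sorting by path keeps two such lists comparable line by line. A sketch using an illustrative stand-in tree:

```shell
# An illustrative stand-in tree for the nested ./lib/ directory
mkdir -p /tmp/lib-demo/gtk
printf 'lib one\n' > /tmp/lib-demo/libssh.so
printf 'lib two\n' > /tmp/lib-demo/gtk/libgtk.so
# Hash every regular file under the tree; sorting by path (field 2)
# keeps two such lists comparable line by line
(cd /tmp/lib-demo && find . -type f -exec sha256sum {} + | sort -k 2)
```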

But what version of Remmina did you compile? Well, that’s in the Manifest. In the text file you’ll find (usually at the bottom) the git repository and branch that you just used. At the time of this writing, that is:

"type": "git",
"url": "https://gitlab.com/Remmina/Remmina.git",
"tag": "v1.4.8",
"commit": "7ebc497062de66881b71bbe7f54dabfda0129ac2"

Here, you can decide to look at the Remmina code itself:

git clone --recurse-submodules https://gitlab.com/Remmina/Remmina.git
cd ./Remmina
git checkout tags/v1.4.8

The last two commands are important, since they ensure that you are looking at the right version of Remmina. Make sure you use the tag that corresponds to the one in the Manifest file. Now you can see everything that you just built.

What if…?

The question on some minds is: what if the hashes don’t match? Quoting a famous novel: “Don’t Panic.” There are multiple legitimate reasons as to why the hashes do not match.

It might be that you are not looking at the same version. If you followed this guide to a T, it should give matching results, but minor errors will cause vastly different results. Repeat the process, and ask for help if you’re unsure if you’re making errors. Perhaps Remmina is in the process of updating.

But if that still doesn’t justify the mismatch in hashes, go to the maintainers of Remmina on Flathub and open an issue. Assume good intentions, but you might be onto something that isn’t totally right.

The most obvious upstream issue is that Remmina does not properly support reproducible builds yet. The code of Remmina needs to be written in such a way that repeating the same action twice gives the same result. For developers, there is an entire guide on how to do that. If this is the case, there should be an issue on the upstream bug tracker; if it is not there, make sure that you create one, explaining your steps and the impact.

If all else fails, and you’ve informed upstream about the discrepancies and they don’t know what is happening either, then it’s time to send an email to the Administrators of Flathub and the developer in question.

Conclusion

At this point, you’ve gone through the entire process of validating a single piece of a bigger picture. Here, you can branch off in different directions:

  • Try another Flatpak application you like or use regularly
  • Try the RPM version of Remmina
  • Do a deep dive into the C code of Remmina
  • Relax for a day, knowing that the Web of Trust is a collective effort

In the grand scheme of things, we can all carry a small part of responsibility in the Web of Trust. By taking free/libre open source software (FLOSS) concepts and applying them in the real world, you can protect yourself and others. Last but not least, by understanding how the Web of Trust works you can see how FLOSS software provides unique protections.


New podcast explores the people and AI powering Microsoft Security solutions

It’s hard to keep pace with all the changes happening in the world of cybersecurity. Security experts and leaders must continue learning (and unlearning) to stay ahead of the ever-evolving threat landscape. In fact, many of us are in this field because of our desire to continuously challenge ourselves and serve the greater good.

So many of the advancements in security now fall under this amorphous, at times controversial, and complex umbrella called “artificial intelligence” (AI). Neural networks, clustering, fuzzy logic, heuristics, deep learning, random forests, adversarial machine learning (ML), unsupervised learning: these are just a few of the concepts being actively researched and used in security today.

But what do these techniques do? How do they work? What are the benefits? As security professionals, we know you have these questions, and so we decided to create Security Unlocked, a new podcast launching today, to help unlock (we promise not to overuse this pun) insights into these new technologies and the people creating them.

In each episode, hosts Nic Fillingham and Natalia Godyla take a closer look at the latest in threat intelligence, security research, and data science. Our expert guests share insights into how modern security technologies are being built, how threats are evolving, and how machine learning and artificial intelligence are being used to secure the world.

Each episode will also feature an interview with one of the many experts working in Microsoft Security. Guests will share their unique path to Microsoft and the infosec field, what they love about their calling and their predictions about the future of ML and AI.

New episodes of Security Unlocked will be released twice a month, with the first three episodes available today on all major podcast platforms. We will explore specific topics in future blog posts and link to the episodes that cover them in more depth.

Episode 1: Going ‘deep’ to identify attacks, and Holly Stewart

Listen here.

Guests: Arie Agranonik and Holly Stewart

Blog referenced: Seeing the big picture: Deep learning-based fusion of behavior signals for threat detection

In this episode, Nic and Natalia invited Arie Agranonik, Senior Data Scientist at Microsoft, to better understand how we’re using deep learning models to look at behavioral signals and identify malicious process trees. In their chat, Arie explains the differences and use cases for techniques such as deep learning, neural networks, and transfer learning.

Nic and Natalia also speak with Holly Stewart, Principal Research Manager at Microsoft, to learn how, and when, to use machine learning, best practices for building an awesome security research team, and the power of diversity in security.

Episode 2: Unmasking threats with AMSI and ML, and Dr. Josh Neil

Listen here.

Guests: Ankit Garg, Geoff McDonald, and Dr. Josh Neil

Blog referenced: Stopping Active Directory attacks and other post-exploitation behavior with AMSI and machine learning

In this episode, members of the Microsoft Defender ATP Research team chat about how the Antimalware Scan Interface (AMSI) and machine learning are stopping Active Directory attacks.

They’re also joined by Josh Neil, Principal Data Science Manager at Microsoft, as he discusses his path from music to mathematics, one definition of “artificial intelligence,” and the importance of combining multiple weak signals to gain a comprehensive view of an attack.

Episode 3: Behavior-based protection for the under-secured, and Dr. Karen Lavi

Listen here.

Guests: Hardik Suri and Dr. Karen Lavi

Blog referenced: Defending Exchange servers under attack

In this episode, Nic and Natalia chat with Hardik Suri on the importance of keeping servers up-to-date and how behavior-based monitoring is helping protect under-secured Exchange servers.

Dr. Karen Lavi, Senior Data Scientist Lead at Microsoft, joins the discussion to talk about commonalities between neuroscience and cybersecurity, her unique path to Microsoft (Teaser: She started in the Israeli Defense Force and later got her PhD in neuroscience), and her predictions on the future of AI.

Please join us monthly on the Microsoft Security Blog for new episodes. If you have feedback on how we can improve the podcast or suggestions for topics to cover in future episodes, please email us at [email protected], or talk to us on our @MSFTSecurity Twitter handle.

And don’t forget to subscribe to Security Unlocked.


Web of Trust, Part 1: Concept

Every day we rely on technologies that nobody can fully understand. Since well before the industrial revolution, complex and challenging tasks have required an approach that breaks the work into smaller, specialized parts. Each of us ends up with deep knowledge of some parts of our lives, and trusts the skills that others have learned for the rest. This shared-knowledge approach also applies to software. Even the most avid readers of this magazine will likely not compile and validate every piece of code they run, simply because the world of computers is itself too big for one person to grasp.

Still, even though it is nearly impossible to understand everything that happens within your PC when you are using it, that does not leave you blind and unprotected. FLOSS software shares trust, giving protection to all users, even if individual users can’t grasp all parts in the system. This multi-part article will discuss how this ‘Web of Trust’ works and how you can get involved.

But first we’ll have to take a step back and discuss the basic concepts before we can delve into the details of the web. Also, a note before we start: security is not just about viruses and malware. It also includes your privacy, your economic stability, and your technological independence.

One-Way System

By design, computers can only work in the most rudimentary forms of logic: true or false, AND or OR. This Boolean logic is not readily accessible to humans, so we do something special: we write applications in code that we can (reasonably) comprehend (human-readable code). Once completed, we turn this human-readable code into code that the computer can execute (machine code).

This conversion step is called compiling or building, and it’s a one-way process. Compiled code (machine code) is not really understandable by humans, and it takes special tools to study it in detail. You can understand small chunks, but on the whole, an entire application becomes a black box.
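Python’s bytecode is a gentle example of the same one-way step. The sketch below is illustrative only, and Python bytecode is far simpler than real machine code, but it shows readable source turning into opaque bytes:

```python
import dis

# Human-readable source: anyone can see what this does.
source = "def greet(name):\n    return 'Hello, ' + name\n"

# Compiling turns the source into bytecode the interpreter executes.
code = compile(source, "<example>", "exec")

# Fish out the code object for greet() from the module's constants.
func_code = next(c for c in code.co_consts if hasattr(c, "co_code"))

# The raw bytecode is just bytes: meaningless without special tooling.
print(func_code.co_code)

# Tools like `dis` can partially reverse this, but names, comments,
# and intent are lost; disassembling real machine code is harder still.
dis.dis(func_code)
```

Even here, the bytecode alone tells you almost nothing about the author’s intent; with native machine code, the gap between source and binary is far wider.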

This subtle difference shifts power, power in this case being the influence of one person over another. The person who wrote the human-readable version of an application and then releases it as compiled code for others to use knows everything about what the code does, while the end user knows very little. When using software in compiled form, it is impossible to know for certain what an application is intended to do, unless the original human-readable code can be viewed.

The Nature of Power

Spearheaded by Richard Stallman, this shift of power became a point of concern. This discussion started in the 1980s, for this was the time that computers left the world of academia and research, and entered the world of commerce and consumers. Suddenly, that power became a source of control and exploitation.

One way to combat this imbalance of power was the concept of FLOSS. FLOSS is built on the Four Freedoms, which give you a wide array of other affiliated rights and guarantees. In essence, FLOSS uses copyright licensing as a form of moral contract that forbids software developers from leveraging this one-way power against their users. The principal way of doing this is with the GNU General Public Licenses, which Richard Stallman created and has promoted ever since.

One of those guarantees is that you can see the code that should be running on your device. When you get a device using FLOSS software, the manufacturer should provide you the code the device is running, as well as all the instructions you need to compile that code yourself. You can then replace the code on the device with a version you compiled yourself. Even better, if you compare your version with the version on the device, you can see whether the manufacturer tried to cheat you or other customers.

This is where the Web of Trust comes back into the picture. The Web of Trust implies that even if the vast majority of people can’t validate the workings of a device, others can do so on their behalf. Journalists, security analysts, and hobbyists can do the work that others might be unable to do. And if they find something, they have the power to share their findings.

Security by Blind Trust

This holds, of course, only if the application and all components underneath it are FLOSS. Proprietary software, or even software that is merely open source, has compiled versions that nobody can recreate and validate. Thus, you can never truly know if that software is secure. It might have a backdoor, it might sell your personal data, or it might be pushing a closed ecosystem to create vendor lock-in. With closed-source software, your security is only as good as the trustworthiness of the company making it.

For companies and developers, this actually creates another snare. You might still care about your users and their security, but you have become a liability: if a criminal gets into your official builds or supply chain, there is no way for anybody to discover that afterwards. An increasing number of attacks do not target users directly, but instead try to get in by exploiting the trust that companies and developers have carefully grown.

You should also not underestimate pressure from outside: governments can ask you to ignore a vulnerability, or even demand cooperation. Investment firms or shareholders may also insist that you create vendor lock-in for future use. The blind trust that you demand of your users can be used against you.

Security by a Web of Trust

If you are a user, FLOSS software is good because others can warn you when they find suspicious elements. You can use any FLOSS device with minimal economic risk, and there are many FLOSS developers who care for your privacy. Even if the details are beyond you, there are rules in place to facilitate trust.

If you are a tinkerer, FLOSS is good because, with a little extra work, you can check the promises of others. You can warn people when something goes wrong, and you can validate the warnings of others. You’re also able to check individual parts of a larger picture. The libraries used by FLOSS applications are also open for review: it’s trust all the way down.

For companies and developers, FLOSS is also a great reassurance that your users’ trust can’t be easily subverted. If malicious actors wish to attack your users, any irregularity can quickly be spotted. Last but not least, since you also stand to defend your customers’ economic well-being and privacy, you can use that as an important selling point to customers who care about their own security.

Fedora’s case

Fedora embraces the concept of FLOSS and stands strong to defend it. There are comprehensive legal guidelines, and Fedora’s guiding principles, the Four Foundations, echo the spirit of the Four Freedoms: Freedom, Friends, Features, and First.

Fedora's Foundation logo, with Freedom highlighted. Illustrative.

To this end, entire systems have been set up to facilitate this kind of security. Fedora works completely in the open, and any user can check the official servers. Koji is the Fedora build system, and you can see every application and its build logs there. For added security, there is also Bodhi, which orchestrates deployment: multiple people must approve an update before the application can become available.

This creates the Web of Trust on which you can rely. Every package in the repository goes through the same process, and at every point somebody can intervene. There are also escalation systems in place to report issues, so that problems can be tackled quickly when they occur. Individual contributors also know that their work can be reviewed at any time, which in itself is already enough of a precaution to dissuade mischievous thoughts.

You don’t have to trust Fedora implicitly; you get something better: trust in users like you.


Sophisticated new Android malware marks the latest evolution of mobile ransomware

Attackers are persistent and motivated to continuously evolve – and no platform is immune. That is why Microsoft has been working to extend its industry-leading endpoint protection capabilities beyond Windows. The addition of mobile threat defense into these capabilities means that Microsoft Defender for Endpoint (previously Microsoft Defender Advanced Threat Protection) now delivers protection on all major platforms.

Microsoft’s mobile threat defense capabilities further enrich the visibility that organizations have on threats in their networks, as well as provide more tools to detect and respond to threats across domains and across platforms. Like all of Microsoft’s security solutions, these new capabilities are backed by a global network of threat researchers and security experts whose deep understanding of the threat landscape guides the continuous innovation of security features and ensures that customers are protected from ever-evolving threats.

For example, we found a particularly sophisticated piece of Android ransomware with novel techniques and behavior, exemplifying the rapid evolution of mobile threats that we have also observed on other platforms. The mobile ransomware is the latest variant of a ransomware family that’s been in the wild for a while but has been evolving non-stop. This ransomware family is known for being hosted on arbitrary websites and circulated on online forums using various social engineering lures, including masquerading as popular apps, cracked games, or video players. The new variant caught our attention because it’s advanced malware with unmistakably malicious characteristics and behavior, yet it manages to evade many available protections, registering a low detection rate against security solutions.

As with most Android ransomware, this new threat doesn’t actually block access to files by encrypting them. Instead, it blocks access to the device by displaying a screen that appears over every other window, so the user can’t do anything else. This screen is the ransom note, which contains threats and instructions to pay the ransom.

Screenshot of mobile ransom note in Russian language

Figure 1. Sample ransom note used by older ransomware variants

What’s innovative about this ransomware is how it displays its ransom note. In this blog, we’ll detail the innovative ways in which this ransomware surfaces its ransom note using Android features we haven’t seen leveraged by malware before, as well as incorporating an open-source machine learning module designed for context-aware cropping of its ransom note.

New scheme, same goal

In the past, Android ransomware abused a special permission called “SYSTEM_ALERT_WINDOW” to display its ransom note. Apps that hold this permission can draw a window that belongs to the system group and can’t be dismissed: no matter what button is pressed, the window stays on top of all other windows. This window type was intended for system alerts or errors, but Android threats misused it to force the attacker-controlled UI to fully occupy the screen, blocking access to the device. Attackers create this scenario to persuade users to pay the ransom so they can regain access to the device.

To catch these threats, security solutions used heuristics that focused on detecting this behavior. Google later implemented platform-level changes that practically eliminated this attack surface. These changes include:

  1. Removing the SYSTEM_ALERT_WINDOW error and alert window types, and introducing a few other types as replacement
  2. Elevating the permission status of SYSTEM_ALERT_WINDOW to special permission by putting it into the “above dangerous” category, which means that users have to go through many screens to approve apps that ask for permission, instead of just one click
  3. Introducing an overlay kill switch on Android 8.0 and later that users can activate anytime to deactivate a system alert window

To adapt, Android malware evolved to misuse other features, but these aren’t as effective. For example, some strains of ransomware abuse accessibility features, a method that could easily alarm users because accessibility is a special permission that requires users to go through several screens and accept a warning that the app will be able to monitor activity via accessibility services. Other ransomware families use infinite loops of drawing non-system windows, but in between drawing and redrawing, it’s possible for users to go to settings and uninstall the offending app.

The new Android ransomware variant overcomes these barriers by evolving further than any Android malware we’ve seen before. To surface its ransom note, it uses a series of techniques that take advantage of the following components on Android:

  1. The “call” notification, among several categories of notifications that Android supports, which requires immediate user attention.
  2. The “onUserLeaveHint()” callback method of the Android Activity (i.e., the typical GUI screen the user sees) is called as part of the activity lifecycle when the activity is about to go into the background as a result of user choice, for example, when the user presses the Home key.

The malware connects the dots and uses these two components to create a special type of notification that triggers the ransom screen via the callback.

Screenshot of malware code

Figure 2. The notification with full intent and set as “call” category

As the code snippet shows, the malware creates a notification builder and then does the following:

  1. setCategory(“call”) – This means that the notification is built as a very important notification that needs special privilege.
  2. setFullScreenIntent() – This API wires the notification to a GUI so that it pops up when the user taps on it. At this stage, half the job is done for the malware. However, the malware doesn’t want to depend on user interaction to trigger the ransomware screen, so it also leverages an Android callback:

Figure 3. The malware overriding onUserLeaveHint

As the code snippet shows, the malware overrides the onUserLeaveHint() callback function of the Activity class. onUserLeaveHint() is called whenever the malware screen is pushed to the background; the override causes the in-call Activity to be automatically brought back to the foreground. Recall that the malware hooked the RansomActivity intent to the notification that was created as a “call”-type notification. This creates a chain of events that triggers the automatic pop-up of the ransomware screen without infinite redrawing or posing as a system window.
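The chain of events can be modeled in a few lines of plain Python. This is a deliberately simplified, platform-agnostic sketch, not real Android code; the class and method names merely mirror the Android lifecycle for readability:

```python
class FakeRansomActivity:
    """Toy model of the malware's trick (not real Android code).

    The real malware posts a "call"-category notification whose
    full-screen intent points back at the ransom screen, then overrides
    onUserLeaveHint() so any attempt to leave re-triggers that intent.
    """

    def __init__(self):
        self.visible = False
        self.trigger_count = 0

    def show_ransom_screen(self):
        # Stands in for the full-screen "call" notification firing.
        self.visible = True
        self.trigger_count += 1

    def on_user_leave_hint(self):
        # Called by the (simulated) OS when the user backgrounds the
        # activity, e.g. by pressing Home. The malware immediately
        # re-fires the full-screen intent, bringing itself back.
        self.show_ransom_screen()

    def user_presses_home(self):
        self.visible = False          # OS backgrounds the activity...
        self.on_user_leave_hint()     # ...and invokes the lifecycle hook.

activity = FakeRansomActivity()
activity.show_ransom_screen()
activity.user_presses_home()
assert activity.visible            # the ransom screen is right back on top
```

The point of the model: every escape route the user takes feeds straight back into the code path that redisplays the ransom note, with no busy-looping and no system-window permission required.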

Machine learning module indicates continuous evolution

As mentioned, this ransomware is the latest variant of a malware family that has undergone several stages of evolution. The knowledge graph below shows the various techniques this ransomware family has been seen using, including abusing the system alert window, abusing accessibility features, and, more recently, abusing notification services.

Knowledge graph showing techniques used by the Android ransomware family

Figure 4. Knowledge graph of techniques used by ransomware family

This ransomware family’s long history tells us that its evolution is far from over. We expect it to churn out new variants with even more sophisticated techniques. In fact, recent variants contain code forked from an open-source machine learning module used by developers to automatically resize and crop images based on screen size, a valuable function given the variety of Android devices.

The frozen TinyML model is useful for making sure images fit the screen without distortion. In the case of this ransomware, using the model would ensure that its ransom note, typically a fake police notice or explicit images supposedly found on the device, appears less contrived and more believable, increasing the chances of the user paying the ransom.
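For context, the kind of geometry such a module has to get right can be illustrated with a naive, non-ML center-crop calculation in Python. This is purely illustrative; the malware’s bundled model chooses the crop adaptively based on image content, which a fixed center crop cannot do:

```python
def center_crop_box(width, height, target_w, target_h):
    """Compute a center-crop box (left, top, right, bottom) matching a
    target aspect ratio -- a naive stand-in for the context-aware
    cropping the malware's bundled ML module performs."""
    target_ratio = target_w / target_h
    if width / height > target_ratio:
        new_w = int(height * target_ratio)   # too wide: trim the sides
        left = (width - new_w) // 2
        return (left, 0, left + new_w, height)
    new_h = int(width / target_ratio)        # too tall: trim top/bottom
    top = (height - new_h) // 2
    return (0, top, width, top + new_h)

# A 1000x500 image cropped for a 9:16 portrait screen keeps only a
# narrow central slice; a context-aware model would pick a better slice.
print(center_crop_box(1000, 500, 9, 16))
```

Handling the huge variety of Android screen sizes is exactly why bundling a resizing model is valuable to the malware authors.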

The library that uses tinyML is not yet wired to the malware’s functionalities, but its presence in the malware code indicates the intention to do so in future variants. We will continue to monitor this ransomware family to ensure customers are protected and to share our findings and insights to the community for broad protection against these evolving mobile threats.

Protecting organizations from threats across domains and platforms

Mobile threats continue to rapidly evolve, with attackers continuously attempting to sidestep technological barriers and creatively find ways to accomplish their goal, whether financial gain or finding an entry point to broader network compromise.

This new mobile ransomware variant is an important discovery because the malware exhibits behaviors that have not been seen before and could open doors for other malware to follow. It reinforces the need for comprehensive defense powered by broad visibility into attack surfaces as well as domain experts who track the threat landscape and uncover notable threats that might be hiding amidst massive threat data and signals.

Microsoft Defender for Endpoint on Android, now generally available, extends Microsoft’s industry-leading endpoint protection to Android. It detects this ransomware (AndroidOS/MalLocker.B), as well as other malicious apps and files using cloud-based protection powered by deep learning and heuristics, in addition to content-based detection. It also protects users and organizations from other mobile threats, such as mobile phishing, unsafe network connections, and unauthorized access to sensitive data. Learn more about our mobile threat defense capabilities in Microsoft Defender for Endpoint on Android.

Malware, phishing, and other threats detected by Microsoft Defender for Endpoint are reported to the Microsoft Defender Security Center, allowing SecOps to investigate mobile threats along with endpoint signals from Windows and other platforms using Microsoft Defender for Endpoint’s rich set of tools for detection, investigation, and response.

Threat data from endpoints are combined with signals from email and data, identities, and apps in Microsoft 365 Defender (previously Microsoft Threat Protection), which orchestrates detection, prevention, investigation, and response across domains, providing coordinated defense. Microsoft Defender for Endpoint on Android further enriches organizations’ visibility into malicious activity, empowering them to comprehensively prevent, detect, and respond to attack sprawl and cross-domain incidents.

Technical analysis

Obfuscation

On top of recreating ransomware behavior in ways we haven’t seen before, this Android malware variant uses a new obfuscation technique unique to the Android platform. One of the tell-tale signs of obfuscated malware is the absence of code defining the classes declared in the manifest file.

Malware code showing manifest file

Figure 5. Manifest file

The classes.dex has implementations for only two classes:

  1. The main application class gCHotRrgEruDv, which is invoked when the application opens
  2. A helper class that holds the definitions for custom encryption and decryption

This means that there’s no code corresponding to the components declared in the manifest file: the main activity, the broadcast receivers, and the background services. How does the malware work without code for these key components? As is characteristic of obfuscated threats, the malware stores encrypted binary code in the Assets folder:

Screenshot of Assets folder with encrypted executable code

Figure 6. Encrypted executable code in Assets folder
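An analyst can mechanically cross-check the manifest against classes.dex to surface this tell. A rough Python sketch follows; the component names other than gCHotRrgEruDv are made up for illustration:

```python
def suspicious_missing_classes(manifest_classes, dex_classes):
    """Flag components declared in the manifest that have no code in
    classes.dex -- a common tell for packed/obfuscated Android malware
    whose real payload is decrypted at runtime."""
    return sorted(set(manifest_classes) - set(dex_classes))

# Illustrative names only (not taken from the actual sample's manifest).
manifest = ["com.app.MainActivity", "com.app.BootReceiver", "com.app.BgService"]
dex      = ["com.app.gCHotRrgEruDv", "com.app.CryptoHelper"]

# Every declared component is missing from the dex: strong packing signal.
print(suspicious_missing_classes(manifest, dex))
```

In practice the class lists would come from tools that parse the binary AndroidManifest.xml and the dex file; the set difference is the interesting part.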

When the malware runs for the first time, the static block of the main class runs. The code is heavily obfuscated and made unreadable through name mangling and meaningless variable names:

Figure 7. Static block

Decryption with a twist

The malware uses an interesting decryption routine: the string values passed to the decryption function do not correspond to the decrypted value; they are junk, included simply to hinder analysis.

On Android, an Intent is a software mechanism that allows users to coordinate the functions of different Activities to achieve a task. It’s a messaging object that can be used to request an action from another app component.

The Intent object carries a string value as its “action” parameter. The malware creates an Intent inside the decryption function, using the string value passed in as the name for the Intent. It then decrypts a hardcoded encrypted value and sets the “action” parameter of the Intent using the setAction API. Once this Intent object is generated with the action value pointing to the decrypted content, the decryption function returns the Intent object to its caller. The caller then invokes the getAction method to retrieve the decrypted content.

Figure 8. Decryption function using the Intent object to pass the decrypted value
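A rough, platform-agnostic Python model of the trick, where a plain carrier class stands in for android.content.Intent and a single-byte XOR stands in for the malware’s custom cipher (both are illustrative assumptions, not the real implementation):

```python
SECRET_KEY = 0x5A  # stand-in single-byte key; the real cipher is custom

class FakeIntent:
    """Minimal stand-in for android.content.Intent."""
    def __init__(self, name):
        self.name = name       # junk string passed in; never used for output
        self._action = None
    def set_action(self, value):
        self._action = value
    def get_action(self):
        return self._action

def decrypt_to_intent(junk_name, encrypted: bytes) -> FakeIntent:
    # The string argument is a decoy; the hardcoded ciphertext is what
    # matters. The result is smuggled out via the Intent's "action".
    intent = FakeIntent(junk_name)
    plaintext = bytes(b ^ SECRET_KEY for b in encrypted).decode()
    intent.set_action(plaintext)
    return intent

ciphertext = bytes(b ^ SECRET_KEY for b in b"startService")
intent = decrypt_to_intent("meaningless junk", ciphertext)
assert intent.get_action() == "startService"
```

The indirection buys the malware two things: the visible string arguments mislead an analyst skimming call sites, and the decrypted value only ever lives inside an innocuous-looking Intent object.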

Payload deployment

Once the static block execution is complete, the Android Lifecycle callback transfers the control to the OnCreate method of the main class.

Malware code showing onCreate method

Figure 9. onCreate method of the main class decrypting the payload

Next, the malware-defined function decryptAssetToDex (a meaningful name we assigned during analysis) receives the string “CuffGmrQRT” as the first argument, which is the name of the encrypted file stored in the Assets folder.

Malware code showing decryption of assets

Figure 10. Decrypting the assets

After decryption, the asset becomes a .dex file. This notable behavior is characteristic of this ransomware family.

Comparison of code of Asset file before and after decryption

Figure 11. Asset file before and after decryption
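One quick way an analyst can confirm that a decrypted blob really is Dalvik bytecode is to check for the documented DEX magic header: the bytes `dex\n`, a three-digit version, and a NUL byte. A small Python sketch:

```python
def looks_like_dex(blob: bytes) -> bool:
    """True if a blob starts with a DEX magic header:
    b'dex\\n' followed by a 3-digit version string and a NUL byte."""
    return (len(blob) >= 8
            and blob[:4] == b"dex\n"
            and blob[4:7].isdigit()
            and blob[7:8] == b"\x00")

# A decrypted payload starting with the 'dex\n035\0' magic is Dalvik code.
assert looks_like_dex(b"dex\n035\x00" + b"\x00" * 100)
assert not looks_like_dex(b"\x12\x34junkjunk")
```

The same eight-byte check is what generic unpacking tools use to decide whether a dumped buffer is worth loading into a dex disassembler.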

Once the encrypted executable is decrypted and dropped into storage, the malware has the definitions for all the components it declared in the manifest file. It then calls the final detonator function to load the dropped .dex file into memory and trigger the main payload.

Malware code showing loading of decrypted dex file

Figure 12. Loading the decrypted .dex file into memory and triggering the main payload

Main payload

When the main payload is loaded into memory, the initial detonator hands over the control to the main payload by invoking the method XoqF (which we renamed to triggerInfection during analysis) from the gvmthHtyN class (renamed to PayloadEntry).

Malware code showing handover from initial module to main payload

Figure 13. Handover from initial module to the main payload

As mentioned, the initial handover component calls triggerInfection with an instance of appObj and a method that returns the value of the variable config.

Malware code showing definition of populateConfigMap

Figure 14. Definition of populateConfigMap, which loads the map with values

Correlating the last two steps, one can observe that the malware payload receives the configuration for the following properties:

  1. number – The default number to be sent to the server (in case the number is not available from the device)
  2. api – The API key
  3. url – The URL to be loaded in a WebView to display the ransom note

The malware saves this configuration to the shared preferences of the app data and then sets up all the broadcast receivers. This registers code components to be notified when certain system events happen, and is done in the function initComponents.

Malware code showing initializing broadcast receiver

Figure 15. Initializing the BroadcastReceiver against system events

From this point on, the malware execution is driven by callback functions that are triggered on system events like connectivity change, unlocking the phone, elapsed time interval, and others.

Dinesh Venkatesan

Microsoft Defender Research


Why we invite security researchers to hack Azure Sphere

Fighting the security battle so our customers don’t have to

IoT devices are becoming more prevalent in almost every aspect of our lives: we will rely on them in our homes, our businesses, and our infrastructure. In February, Microsoft announced the general availability of Azure Sphere, an integrated security solution for IoT devices and equipment. General availability means that we are ready to provide OEMs and organizations with quick and cost-effective device security at scale. However, securing those devices does not stop once we put them into the hands of our customers. It is only the start of a continual battle between the attackers and the defenders.

Building a solution that customers can trust requires investments before and after deployment by complementing up-front technical measures with ongoing practices to find and mitigate risks. In April, we highlighted Azure Sphere’s approach to risk management and why securing IoT is not a one-and-done. Products improve over time, but so do hackers, as well as their skills and tools. New security threats continue to evolve, and hackers invent new ways to attack devices. So, what does it take to stay ahead?

As a Microsoft security product team, we believe in finding and fixing vulnerabilities before the bad guys do. While Azure Sphere continuously invests in code improvements, fuzzing, and other processes of quality control, it often requires the creative mindset of an attacker to expose a potential weakness that otherwise might be missed. Better than trying to think like a hacker is working with them. This is why we operate an ongoing program of red team exercises with security researchers and the hacker community: to benefit from their unique expertise and skill set. That includes being able to test our security promise not just against yesterday’s and today’s, but against even tomorrow’s attacks on IoT devices before they become known more broadly. Our recent Azure Sphere Security Research Challenge, which concluded on August 31, is a reflection of this commitment.

Partnering with MSRC to design a unique challenge

Our goal with the three-month Azure Sphere Security Research Challenge was twofold: to drive new high-impact security research, and to validate Azure Sphere’s security promise against the best challengers in their field. To do so, we partnered with the Microsoft Security Response Center (MSRC) and invited some of the world’s best researchers and security vendors to try to break our device by using the same kinds of attacks as any malicious actor might. To make sure participants had everything they needed to be successful, we provided each researcher with a dev kit, a direct line to our OS Security Engineering Team, access to weekly office hours, and email support in addition to our publicly available operating system kernel source code.

Our goal was to focus the research on the highest impact on customer security, which is why we provided six research scenarios with additional rewards of up to 20 percent on top of the Azure Bounty (up to $40,000), as well as $100,000 for two high-priority scenarios proving the ability to execute code in Microsoft Pluton or in Secure World. We received more than 3,500 applications, which is a testament to the strong interest of the research community in securing IoT. More information on the design of the challenge and our collaboration with MSRC can be found here on their blog post.

Researchers identify high impact vulnerabilities before hackers

The quality of submissions from participants in the challenge far exceeded our expectations. Several participants helped us find multiple potentially high-impact vulnerabilities in Azure Sphere. The quality is a testament to the expertise, determination, and diligence of the participants. Over the course of the challenge, we received a total of 40 submissions, of which 30 led to improvements in our product. Sixteen were bounty-eligible, adding up to a total of $374,300 in bounties awarded. The other 10 submissions identified known areas where potential risk is specifically mitigated in another part of the system—something often referred to in the field as “by design.” The high ratio of valid submissions to total submissions speaks to the extremely high quality of the research demonstrated by the participants.

Graph showing the submission breakdown and the total amount of money eligible to be received through the bounty system.

Jewell Seay, Azure Sphere Operating System Platform Security Lead, has shared detailed information of many of the cases in three recent blog posts describing the security improvements delivered in our 20.07, 20.08, and 20.09 releases. Cisco Talos and McAfee Advanced Threat Research (ATR), in particular, found several important vulnerabilities, and one particular attack chain is highlighted in Jewell’s 20.07 blog.

While the described attack required physical access to a device and could not be executed remotely, it exposed potential weaknesses spanning both the cloud and device components of our product. The attack chain included a potential zero-day privilege escalation exploit in the Linux kernel. We reported the vulnerability to the Linux kernel security team, leading to a fix that was shared with the larger open source community. If you would like to learn more and get an inside view of the challenge from two of our research partners, we highly recommend McAfee ATR’s blog post and whitepaper, or Cisco Talos’ blog post.

What it takes to provide renewable and improving security

With Azure Sphere, we provide our customers with a robust defense based on the Seven Properties of Highly Secured Devices. One of the properties, renewable security, ensures that a device can update to a more secure state—even if it has been compromised. While this is essential, it is not sufficient on its own. An organization must be equipped with the resources, people, and processes that allow for a quick resolution before vulnerabilities impact customers. Azure Sphere customers know that they have the strong commitment of our Azure Sphere Engineering team—that our team is searching for and addressing potential vulnerabilities, even from the most recently invented attack techniques.

We take this commitment to heart, as evidenced by all the fixes that went into our 20.07, 20.08, and 20.09 releases. Within 30 days of McAfee reporting the attack chain to us, we shipped a fix to all of our customers, and because of how Azure Sphere manages updates, no customer action was required. Although submissions arrived in high numbers across multiple release cycles, we analyzed every single report as soon as we received it. The success of our challenge should not be measured only by the number and quality of the reports, but also by how quickly reported vulnerabilities were fixed in the product. When fixing the vulnerabilities found, we made no distinction between those proven to be exploitable and those that were only theoretical. Attackers get creative, and hope is not part of our risk assessment or our commitment to our customers.

Our engagement with the security research community

On behalf of the entire team and our customers, we would like to thank all participants for their help in making Azure Sphere more secure! We were genuinely impressed by the quality and number of high impact vulnerabilities that they found. In addition, we would also like to thank the MSRC team for partnering with us on this challenge.

Our goal is to continue to engage with this community on behalf of our customers going forward, and we will continue to review every potential vulnerability report for Azure Sphere for eligibility under the Azure Bounty Program awards.

Our team learned a lot throughout this challenge, and we will explore and announce additional opportunities to collaborate with the security research community in the future. Protecting our platform and the devices our customers build and deploy on it is a key priority for us. Working with the best security researchers in the field, we will continue to invest in finding potential vulnerabilities before the bad guys do—so you don’t have to!

If you are interested in learning more about how Azure Sphere can help you securely unlock your next IoT innovation:


Recover your files from Btrfs snapshots

As you have seen in a previous article, Btrfs snapshots are a convenient and fast way to make backups. Please note that these articles do not suggest that you avoid backup software or well-tested backup plans. Their goals are to show a great feature of this file system, snapshots, and to inspire curiosity and invite you to explore, experiment and deepen the subject. Read on for more about how to recover your files from Btrfs snapshots.

A subvolume for your project

Let’s assume that you want to save the documents related to a project inside the directory $HOME/Documents/myproject.

As you have seen, a Btrfs subvolume, as well as a snapshot, looks like a normal directory. Why not use a Btrfs subvolume for your project, in order to take advantage of snapshots? To create the subvolume, use this command:

btrfs subvolume create $HOME/Documents/myproject

You can create a hidden directory in which to keep your snapshots:

mkdir $HOME/.snapshots

As you can see, in this case there’s no need to use sudo. However, sudo is still needed to list subvolumes and to use the send and receive commands.

Now you can start writing your documents. Each day (or each hour, or even minute) you can take a snapshot just before you start to work:

btrfs subvolume snapshot -r $HOME/Documents/myproject $HOME/.snapshots/myproject-day1

For better security and consistency, create the snapshot as read only by using the -r flag. A read-only snapshot is also required if you need to send it to an external drive, as shown in the previous article.
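If you take snapshots on a schedule, a date-stamped name is easier to manage than day1, day2, and so on. Here is a small sketch using the paths from this article; the btrfs command itself is commented out because it only works on a real Btrfs filesystem:

```shell
# Build a date-stamped snapshot name, for example myproject-20240131
snap="myproject-$(date +%Y%m%d)"
echo "$snap"
# Take the read-only snapshot under that name:
# btrfs subvolume snapshot -r "$HOME/Documents/myproject" "$HOME/.snapshots/$snap"
```

A cron job or systemd timer could run this to automate the routine.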

Note that in this case, a snapshot of the /home subvolume will not snapshot the $HOME/Documents/myproject subvolume.

How to recover a file or a directory

In this example let’s assume a classic error: you deleted a file by mistake. You can recover it from the most recent snapshot, or recover an older version of the file from an older snapshot. Do you remember that a snapshot appears like a regular directory? You can simply use the cp command to restore the deleted file:

cp $HOME/.snapshots/myproject-day1/filename.odt $HOME/Documents/myproject

Or restore an entire directory:

cp -r $HOME/.snapshots/myproject-day1/directory $HOME/Documents/myproject

What if you delete the entire $HOME/Documents/myproject directory (actually, the subvolume)? You can recreate the subvolume as seen before, and again, you can simply use the cp command to restore the entire content from the snapshot:

btrfs subvolume create $HOME/Documents/myproject
cp -rT $HOME/.snapshots/myproject-day1 $HOME/Documents/myproject
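The -T flag matters here: it copies the contents of the snapshot into the freshly created subvolume instead of creating a nested myproject-day1 directory inside it. You can demonstrate the behavior with ordinary directories, no Btrfs required:

```shell
# Demonstrate cp -rT with throwaway directories
tmp=$(mktemp -d)
mkdir "$tmp/snap" "$tmp/restore"
echo "hello" > "$tmp/snap/file.txt"
# -T treats the destination as the copy itself, not as a parent directory
cp -rT "$tmp/snap" "$tmp/restore"
ls "$tmp/restore"    # file.txt, not a nested snap/ directory
rm -rf "$tmp"
```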

Or you could restore the subvolume by using the btrfs snapshot command (yes, a snapshot of a snapshot):

btrfs subvolume snapshot $HOME/.snapshots/myproject-day1 $HOME/Documents/myproject

How to recover btrfs snapshots from an external drive

You can use the cp command even if the snapshot resides on an external drive. For instance:

cp /run/media/user/mydisk/bk/myproject-day1/filename.odt $HOME/Documents/myproject

You can restore an entire snapshot as well. In this case, since you will use the send and receive commands, you must use sudo. In addition, the restored subvolume will be created as read only, so you also need to set its read-only property back to false:

sudo btrfs send /run/media/user/mydisk/bk/myproject-day1 | sudo btrfs receive $HOME/Documents/
sudo mv $HOME/Documents/myproject-day1 $HOME/Documents/myproject
sudo btrfs property set $HOME/Documents/myproject ro false

Here’s an extra explanation. The btrfs subvolume snapshot command creates an exact copy of a subvolume, but the destination must reside on the same Btrfs filesystem: you can’t use another device as the destination of a snapshot. To copy a subvolume to a different device, take a read-only snapshot and use the send and receive commands.
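In short, the two patterns look like this; both btrfs commands are shown commented out because they require real Btrfs filesystems, and the destination paths are only examples:

```shell
# Same filesystem: an instant, space-efficient snapshot is enough
# btrfs subvolume snapshot -r "$HOME/Documents/myproject" "$HOME/.snapshots/myproject-day2"

# Different device: stream a read-only snapshot with send and receive
# sudo btrfs send "$HOME/.snapshots/myproject-day2" | sudo btrfs receive /run/media/user/mydisk/bk/
echo "snapshot: same filesystem only; send/receive: across filesystems"
```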

For more information, refer to some of the online documentation:

man btrfs-subvolume
man btrfs-send
man btrfs-receive

CVP Ann Johnson on implementing a Zero Trust security model for Microsoft’s remote workforce

Zero Trust has always been key to maintaining business continuity. And now, during the COVID-19 pandemic, it has become even more important in helping enable the largest remote workforce in history. While organizations are empowering people to work securely when, where, and how they want, we have found the most successful are the ones who are also empathetic to the end-user experience. At Microsoft, we refer to this approach as digital empathy. As you take steps to protect a mobile workforce, a Zero Trust strategy grounded in digital empathy will enhance cybersecurity, along with productivity and collaboration.

This was one of a few important topics that I recently discussed during a cybersecurity fireside chat with industry thought leader Kelly Bissell, Global Managing Director of Accenture Security. Accenture, one of Microsoft’s most strategic partners, helps clients use Microsoft 365 to implement a Zero Trust strategy that is inclusive of everyone. “How do we make working from home both convenient and secure for employees during this time of constant change and disruption?” has become a common question both Kelly and I hear from organizations as we discuss the challenges of maintaining business continuity while adapting to this new world, and beyond. I encourage everyone to explore these points more deeply by watching my entire conversation with Kelly.

Our long-term Microsoft-Accenture security relationship helps customers navigate the current environment and emerge even stronger as we look past the pandemic. The following are some of the key steps shared during our conversation that you can take to begin applying digital empathy and Zero Trust to your organization.

Protect your identities with Azure Active Directory

Zero Trust is an “assume breach” security posture that treats each request for access as a unique risk to be evaluated and verified. This starts with strong identity authentication. Azure Active Directory (Azure AD) is an identity and access management (IAM) solution that you can connect to all your apps, including Microsoft apps, non-Microsoft cloud apps, and on-premises apps. Employees sign in once using a single set of credentials, simplifying access. To make it even easier for users, deploy Azure AD solutions like passwordless authentication, which eliminates the need for users to memorize passwords. Multi-factor authentication (MFA) is one of the most important things you can do to help secure employee accounts, so implement MFA for 100 percent of your users, 100 percent of the time.

According to a new Forrester report, The Total Economic Impact™ of Securing Apps with Microsoft Azure Active Directory, customers who secure apps with Microsoft Azure Active Directory can improve user productivity, reduce costs, and gain IT efficiencies to generate a 123 percent return on investment.

Secure employee devices

Devices present another opportunity for bad actors to infiltrate your organization. Employees may run old operating systems or download vulnerable apps on their personal devices. With Microsoft Endpoint Manager, you can guide employees to keep their devices updated. Conditional Access policies allow you to limit or block access to devices that are unknown or don’t comply with your security policies.

An endpoint detection and response (EDR) solution like Microsoft Defender Advanced Threat Protection (Microsoft Defender ATP) can help you detect attacks and automatically block sophisticated malware. Each Microsoft Defender ATP license covers up to five devices per user.

Discover and manage cloud apps

Cloud apps have proliferated in today’s workplace. They are so easy to use that IT departments are often not aware of which cloud apps their employees access. Microsoft Cloud App Security is a cloud app security broker (CASB) that allows you to discover all the apps used in your network. Cloud App Security’s risk catalog includes over 16,000 apps that are assessed using over 80 risk factors. Once you understand the risk profile of the apps in your network, you can decide whether to allow access, block access, or onboard them to Azure AD.

Employees are busy in the best of times. Today, with many working from home for the first time, often in a full house, their stress may be compounded. By simplifying the sign-in process and protecting data on apps and devices, Microsoft 365 security solutions like Azure AD, Microsoft Defender ATP, and Cloud App Security make it easier for employees to work remotely while improving security for the organization.

Digital empathy and Zero Trust are also two of the five security paradigm shifts that will lead to more inclusive user experiences. Next month, I will provide more details about two additional paradigm shifts, the diversity of data, and integrated security solutions.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Follow Ann Johnson @ajohnsocyber for Microsoft’s latest cybersecurity investments and @MSFTSecurity for the latest news and updates on cybersecurity.