Microsoft works with researchers to detect and protect against new RDP exploits

On November 2, 2019, security researcher Kevin Beaumont reported that his BlueKeep honeypot experienced crashes and was likely being exploited. Microsoft security researchers collaborated with Beaumont as well as another researcher, Marcus Hutchins, to investigate and analyze the crashes and confirm that they were caused by a BlueKeep exploit module for the Metasploit penetration testing framework.

BlueKeep is what researchers and the media call CVE-2019-0708, an unauthenticated remote code execution vulnerability in Remote Desktop Services on Windows 7, Windows Server 2008, and Windows Server 2008 R2. Microsoft released a security fix for the vulnerability on May 14, 2019.

While similar vulnerabilities have been abused by worm malware in the past, initial attempts at exploiting this vulnerability involved human operators aiming to penetrate networks via exposed RDP services.

Microsoft had already deployed a behavioral detection for the BlueKeep Metasploit module in early September, so Microsoft Defender ATP customers had protection from this Metasploit module by the time it was used against Beaumont’s honeypot. The module, which appears to be unstable as evidenced by numerous RDP-related crashes observed on the honeypot, triggered the behavioral detection in Microsoft Defender ATP, resulting in the collection of critical signals used during the investigation.

Microsoft security signals showed an increase in RDP-related crashes that are likely associated with the use of the unstable BlueKeep Metasploit module on certain sets of vulnerable machines. We saw:

  • An increase in RDP service crashes from 10 to 100 daily starting on September 6, 2019, when the Metasploit module was released
  • A similar increase in memory corruption crashes starting on October 9, 2019
  • Crashes on external researcher honeypots starting on October 23, 2019

Figure 1. Increase in RDP-related service crashes when the Metasploit module was released

Coin miner campaign using BlueKeep exploit

After extracting indicators of compromise and pivoting to related signal intelligence, Microsoft security researchers found that the main implant used in an earlier coin mining campaign in September contacted the same command-and-control infrastructure used during the October BlueKeep Metasploit campaign. In cases where the exploit did not crash the system, that campaign was also observed installing a coin miner. This indicated that the same attackers were likely responsible for both coin mining campaigns: they had been actively staging coin miner attacks and eventually incorporated the BlueKeep exploit into their arsenal.

Our machine learning models flagged the presence of the coin miner payload used in these attacks on machines in France, Russia, Italy, Spain, Ukraine, Germany, the United Kingdom, and many other countries.

Figure 2. Geographic distribution of coin miner encounters

These attacks were likely initiated as port scans for machines with vulnerable internet-facing RDP services. Once attackers found such machines, they used the BlueKeep Metasploit module to run a PowerShell script that eventually downloaded and launched several other encoded PowerShell scripts.

Figure 3. Techniques and components used in initial attempts to exploit BlueKeep

We pieced together the behaviors of the PowerShell scripts using mostly memory dumps. The following script activities have also been discussed in external researcher blogs:

  1. Initial script downloaded another encoded PowerShell script from an attacker-controlled remote server (5.135.199.19) hosted somewhere in France via port 443.
  2. The succeeding script downloaded and launched a series of three to four other encoded PowerShell scripts.
  3. The final script eventually downloaded the coin miner payload from another attacker-controlled server (109.176.117.11) hosted in Great Britain.
  4. Apart from downloading the payload, the final script also created a scheduled task to ensure the coin miner stayed persistent.

Figure 4. Memory dump of a PowerShell script used in the attacks

The final script saved the coin miner as the following file:

C:\Windows\System32\spool\svchost.exe

The coin miner connected to command-and-control infrastructure at 5.100.251.106 hosted in Israel. Other coin miners deployed in earlier campaigns that did not exploit BlueKeep also connected to this same IP address.

Defending enterprises against BlueKeep

Security signals and forensic analysis show that the BlueKeep Metasploit module caused crashes in some cases, but we cannot discount enhancements that will likely result in more effective attacks. In addition, while there have been no other verified attacks involving ransomware or other types of malware as of this writing, the BlueKeep exploit will likely be used to deliver payloads more impactful and damaging than coin miners.

The new exploit attacks show that BlueKeep will be a threat as long as systems remain unpatched, credential hygiene is not achieved, and overall security posture is not kept in check. Customers are encouraged to identify and update vulnerable systems immediately. Many of these unpatched devices could be unmonitored RDP appliances placed by suppliers and other third parties to occasionally manage customer systems. Because BlueKeep can be exploited without leaving obvious traces, customers should also thoroughly inspect systems that might already be infected or compromised.

To this end, Microsoft customers can use the rich capabilities in Microsoft Defender Advanced Threat Protection (Microsoft Defender ATP) to gain visibility into exploit activities and defend networks against attacks. On top of the behavior-based antivirus and endpoint detection and response (EDR) detections, we released a threat analytics report to help security operations teams conduct investigations specific to this threat. We also wrote advanced hunting queries that customers can use to look for multiple components of the attack.


Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft Defender ATP community.

Read all Microsoft security intelligence blog posts.

Follow us on Twitter @MsftSecIntel.


CVP Ann Johnson: How to balance compliance and security with limited resources

Today, many organizations still struggle to adhere to General Data Protection Regulation (GDPR) mandates even though this landmark regulation took effect nearly two years ago. A key learning for some: being compliant does not always mean you are secure. Shifting privacy regulations, combined with limited budgets and talent shortages, add to today’s business complexities. I hear this concern time and again as I travel around the world meeting with our customers, sharing how Microsoft can empower organizations to work through these challenges successfully.

Most recently, I sat down with Emma Smith, Global Security Director at Vodafone Group, to talk about their best practices for navigating the regulatory environment. Vodafone Group is a global company with mobile operations in 24 countries and partnerships that extend to 42 more. The company also operates fixed broadband operations in 19 markets, serving about 700 million customers. This global reach means they must protect a significant amount of data while adhering to multiple requirements.

Emma and her team have put a lot of time and effort into the strategies and tactics that keep Vodafone and its customers compliant no matter where they are in the world. They’ve learned a lot in this process, and she shared these learnings with me as we discussed the need for organizations to be both secure and compliant, in order to best serve our customers and maintain their trust. You can watch our conversation and hear more in our CISO Spotlight episode.

Cybersecurity enables privacy compliance

As you work to balance compliance with security, keep in mind that, as Emma said, “There is no privacy without security.” If you have separate teams for privacy and security, it’s important that they’re strategically aligned. People only use technology and services they trust, which is why privacy and security go hand in hand.

Vodafone did a security and privacy assessment across all their big data stores to understand where the high-risk data lives and how to protect it. They were then able to implement the same controls for privacy and security. It’s also important to recognize that you will never be immune from an attack, but you can reduce the damage.

Emma offered three recommendations for balancing security with privacy compliance:

  • Develop a risk framework so you can prioritize your efforts.
  • Communicate regularly with the board and executive team to align on risk appetite.
  • Establish the right security capabilities internally and/or through a mix of partners and third parties.

I couldn’t agree more, as these are also important building blocks for any organization as they work to become operationally resilient.

I also asked Emma for her top five steps for becoming compliant with privacy regulations:

  • Comply with international standards first, then address local rules.
  • Develop a clear, board-approved strategy.
  • Measure progress against your strategy.
  • Develop a prioritized program of work with clear outcomes.
  • Stay abreast of new technologies and new threats.

The simplest way to manage your risk is to minimize the amount of data that you store. Privacy assessments will help you know where the data is and how to protect it. Regional and local laws can provide tools to guide your standards. Protecting online privacy and personal data is a big responsibility, but with a risk management approach, you can go beyond the “letter of the law” to better safeguard data and support online privacy as a human right.

Learn more

Watch my conversation with Emma about balancing security with privacy compliance. To learn more about compliance and GDPR, read Microsoft Cloud safeguards individual privacy.

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

CISO Spotlight Series

Address cybersecurity challenges head-on with 10-minute video episodes that discuss cybersecurity problems and solutions from AI to Zero Trust.

Watch today


Cloning a MAC address to bypass a captive portal

If you ever attach to a WiFi system outside your home or office, you often see a portal page. This page may ask you to accept terms of service or some other agreement to get access. But what happens when you can’t connect through this kind of portal? This article shows you how to use NetworkManager on Fedora to deal with some failure cases so you can still access the internet.

How captive portals work

Captive portals are web pages offered when a new device is connected to a network. When the user first accesses the Internet, the portal captures all web page requests and redirects them to a single portal page.

The page then asks the user to take some action, typically agreeing to a usage policy. Once the user agrees, they may authenticate to a RADIUS or other type of authentication system. In simple terms, the captive portal registers and authorizes a device based on the device’s MAC address and end user acceptance of terms. (The MAC address is a hardware-based value attached to any network interface, like a WiFi chip or card.)

Sometimes a device can’t load the captive portal page needed to authenticate and authorize it for the location’s WiFi access. Examples include mobile devices and gaming consoles (Switch, PlayStation, etc.), which usually won’t launch a captive portal page when connecting to the Internet. You may see this situation when connecting to hotel or public WiFi access points.

You can use NetworkManager on Fedora to resolve these issues, though. Fedora will let you temporarily clone the connecting device’s MAC address and authenticate to the captive portal on the device’s behalf. You’ll need the MAC address of the device you want to connect to the network. Typically this is printed somewhere on the device and labeled. It’s a six-byte hexadecimal value, so it might look like 4A:1A:4C:B0:38:1F. You can also usually find it through the device’s built-in menus.

Cloning with NetworkManager

First, open nm-connection-editor, or open the WiFi settings via the Settings applet. You can then use NetworkManager to clone as follows:

  • For Ethernet – Select the connected Ethernet connection. Then select the Ethernet tab. Note or copy the current MAC address. Enter the MAC address of the console or other device in the Cloned MAC address field.
  • For WiFi – Select the WiFi profile name. Then select the WiFi tab. Note or copy the current MAC address. Enter the MAC address of the console or other device in the Cloned MAC address field.
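If you prefer the command line, nmcli can set the same property. The following is a minimal sketch; “MyWifi” is a hypothetical profile name, and the MAC address is the example value from above:

$ nmcli connection show                      # list profiles to find the right name
$ nmcli connection modify "MyWifi" wifi.cloned-mac-address 4A:1A:4C:B0:38:1F
$ nmcli connection up "MyWifi"               # re-activate so the change takes effect

For a wired profile, use ethernet.cloned-mac-address instead of wifi.cloned-mac-address.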

Bringing up the desired device

Once the Fedora system connects with the Ethernet or WiFi profile, the cloned MAC address is used to request an IP address, and the captive portal loads. Enter the credentials needed and/or select the user agreement. The MAC address will then get authorized.

Now, disconnect the WiFi or Ethernet profile, and change the Fedora system’s MAC address back to its original value. Then boot up the console or other device. The device should now be able to access the Internet, because its network interface has been authorized via your Fedora system.
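With nmcli, the same sketch (still assuming the hypothetical “MyWifi” profile from above) would set the cloned-mac-address property back to permanent, which restores the interface’s own hardware address:

$ nmcli connection modify "MyWifi" wifi.cloned-mac-address permanent
$ nmcli connection up "MyWifi"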

This isn’t all that NetworkManager can do, though. For instance, check out this article on randomizing your system’s hardware address for better privacy.


Threat experts on demand: Now available to assist organizations with their security investigations

Microsoft Threat Experts is the managed threat hunting service within Microsoft Defender Advanced Threat Protection (ATP) that includes two capabilities: targeted attack notifications and experts on demand.

Today, we are extremely excited to share that experts on demand is now generally available and gives customers direct access to real-life Microsoft threat analysts to help with their security investigations.

With experts on demand, Microsoft Defender ATP customers can engage directly with Microsoft security analysts to get guidance and insights needed to better understand, prevent, and respond to complex threats in their environments. This capability was shaped through partnership with multiple customers across various verticals by investigating and helping mitigate real-world attacks. From deep investigation of machines that customers had a security concern about, to threat intelligence questions related to anticipated adversaries, experts on demand extends and supports security operations teams.

The other Microsoft Threat Experts capability, targeted attack notifications, delivers alerts that are tailored to organizations and provides as much information as can be quickly delivered to bring attention to critical threats in their network, including the timeline, scope of breach, and the methods of intrusion. Together, the two capabilities make Microsoft Threat Experts a comprehensive managed threat hunting solution that provides an additional layer of expertise and optics for security operations teams.

Experts on the case

By design, the Microsoft Threat Experts service has as many use cases as there are unique organizations with unique security scenarios and requirements. One particular case showed how an alert in Microsoft Defender ATP led to informed customer response, aided by a targeted attack notification that progressed to an experts on demand inquiry, resulting in the customer fully remediating the incident and improving their security posture.

In this case, Microsoft Defender ATP endpoint protection capabilities recognized a new malicious file on a single machine within an organization. The organization’s security operations center (SOC) promptly investigated the alert and came to suspect it might indicate a new campaign from an advanced adversary specifically targeting them.

Microsoft Threat Experts, who are constantly hunting on behalf of this customer, had independently spotted and investigated the malicious behaviors associated with the attack. With knowledge about the adversaries behind the attack and their motivation, Microsoft Threat Experts sent the organization a bespoke targeted attack notification, which provided additional information and context, including the fact that the file was related to an app that was targeted in a documented cyberattack.

To create a fully informed path to mitigation, experts pointed to information about the scope of compromise, relevant indicators of compromise, and a timeline of observed events, which showed that the file executed on the affected machine and proceeded to drop additional files. One of these files attempted to connect to a command-and-control server, which could have given the attackers direct access to the organization’s network and sensitive data. Microsoft Threat Experts recommended full investigation of the compromised machine, as well as the rest of the network for related indicators of attack.

Based on the targeted attack notification, the organization opened an experts on demand investigation, which allowed the SOC to have a line of communication and consultation with Microsoft Threat Experts. Microsoft Threat Experts were able to immediately confirm the attacker attribution the SOC had suspected. Using Microsoft Defender ATP’s rich optics and capabilities, coupled with intelligence on the threat actor, experts on demand validated that there were no signs of second-stage malware or further compromise within the organization. Since, over time, Microsoft Threat Experts had developed an understanding of this organization’s security posture, they were able to share that the initial malware infection was the result of a weak security control: allowing users to exercise unrestricted local administrator privilege.

Experts on demand in the current cybersecurity climate

On a daily basis, organizations have to fend off an onslaught of increasingly sophisticated attacks that present unique security challenges: supply chain attacks, highly targeted campaigns, and hands-on-keyboard attacks. With Microsoft Threat Experts, customers can work with Microsoft to augment their security operations capabilities and increase confidence in investigating and responding to security incidents.

Now that experts on demand is generally available, Microsoft Defender ATP customers have an even richer way of tapping into Microsoft’s security experts and gaining access to the skills, experience, and intelligence necessary to face adversaries.

Experts on demand provides insights into attacks, technical guidance on next steps, and advice on risk and protection. Experts can be engaged directly from within the Microsoft Defender Security Center, so they are part of the existing security operations experience.

We are happy to bring experts on demand within reach of all Microsoft Defender ATP customers. Start your 90-day free trial via the Microsoft Defender Security Center today.

Learn more about Microsoft Defender ATP’s managed threat hunting service here: Announcing Microsoft Threat Experts.


Build a virtual private network with Wireguard

Wireguard is a new VPN designed as a replacement for IPSec and OpenVPN. Its design goal is to be simple and secure, and it takes advantage of recent technologies such as the Noise Protocol Framework. Some consider Wireguard’s ease of configuration akin to OpenSSH. This article shows you how to deploy and use it.

It is currently in active development, so it might not be the best choice for production machines. However, Wireguard is under consideration for inclusion in the Linux kernel. The design has been formally verified,* and proven to be secure against a number of threats.

When deploying Wireguard, keep your Fedora Linux system updated to the most recent version, since Wireguard does not have a stable release cadence.

Set the timezone

To check and set your timezone, first display current time information:

timedatectl

Then if needed, set the correct timezone, for example to Europe/London.

timedatectl set-timezone Europe/London

Note that your system’s real time clock (RTC) may continue to be set to UTC or another timezone.

Install Wireguard

To install, enable the COPR repository for the project and then install with dnf, using sudo:

$ sudo dnf copr enable jdoss/wireguard
$ sudo dnf install wireguard-dkms wireguard-tools

Once installed, two new commands become available, along with support for systemd:

  • wg: Configures wireguard interfaces
  • wg-quick: Brings up the VPN tunnels

Create the configuration directory for Wireguard, and apply a umask of 077. A umask of 077 allows read, write, and execute permission for the file’s owner (root), but prohibits read, write, and execute permission for everyone else.

mkdir /etc/wireguard
cd /etc/wireguard
umask 077

Generate Key Pairs

Generate the private key, then derive the public key from it.

$ wg genkey > /etc/wireguard/privkey
$ wg pubkey < /etc/wireguard/privkey > /etc/wireguard/publickey

Alternatively, this can be done in one go:

$ wg genkey | tee /etc/wireguard/privkey | wg pubkey > /etc/wireguard/publickey

There is a vanity address generator, which might be of interest to some. You can also generate a pre-shared key to provide a level of quantum protection:

$ wg genpsk > psk

This will be the same value for both the server and client, so you only need to run the command once.

Configure Wireguard server and client

Both the client and server configs have an [Interface] section that specifies the IP address assigned to the interface, along with the private key.

Each config also has a [Peer] section containing the other side’s PublicKey, along with the shared PresharedKey. Additionally, this block lists the IP addresses that are allowed to use the tunnel (AllowedIPs).

Server

A firewall rule is added when the interface is brought up, along with enabling masquerading. Make sure to note the /24 IPv4 address range within Interface, which differs from the client. Edit the /etc/wireguard/wg0.conf file as follows, using the IP address for your server for Address, and the client IP address in AllowedIPs. The <SERVER_PRIVATE_KEY>, <CLIENT_PUBLIC_KEY>, and <PRESHARED_KEY> placeholders stand for the keys generated earlier.

[Interface]
Address = 192.168.2.1/24, fd00:7::1/48
PrivateKey = <SERVER_PRIVATE_KEY>
PostUp = firewall-cmd --zone=public --add-port 51820/udp && firewall-cmd --zone=public --add-masquerade
PostDown = firewall-cmd --zone=public --remove-port 51820/udp && firewall-cmd --zone=public --remove-masquerade
ListenPort = 51820

[Peer]
PublicKey = <CLIENT_PUBLIC_KEY>
PresharedKey = <PRESHARED_KEY>
AllowedIPs = 192.168.2.2/32, fd00:7::2/48

Allow forwarding of IP packets by adding the following to /etc/sysctl.conf:

net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1

Load the new settings:

$ sysctl -p

Forwarding will be preserved after a reboot.

Client

The client is very similar to the server config, but has an optional additional entry of PersistentKeepalive set to 30 seconds. This is to prevent NAT from causing issues, and depending on your setup might not be needed. Setting AllowedIPs to 0.0.0.0/0 will forward all traffic over the tunnel. Edit the client’s /etc/wireguard/wg0.conf file as follows, using your client’s IP address for Address and the server IP address at the Endpoint.

[Interface]
Address = 192.168.2.2/32, fd00:7::2/48
PrivateKey = <CLIENT_PRIVATE_KEY>

[Peer]
PublicKey = <SERVER_PUBLIC_KEY>
PresharedKey = <PRESHARED_KEY>
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = <SERVER_IP>:51820
PersistentKeepalive = 30

Test Wireguard

Start and check the status of the tunnel on both the server and client:

$ systemctl start wg-quick@wg0
$ systemctl status wg-quick@wg0
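You can also inspect the tunnel state directly with the wg tool. The output lists each peer’s endpoint, the time of the latest handshake, and transfer counters, which makes it easy to confirm traffic is actually flowing:

$ sudo wg show wg0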

To test the connections, try the following:

ping google.com
ping6 ipv6.google.com
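Before testing general connectivity, you can also verify the tunnel itself by pinging the other end’s tunnel address; the addresses below are the ones assigned in the example configs above (run from the client):

$ ping 192.168.2.1
$ ping6 fd00:7::1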

Then check external IP addresses:

dig +short myip.opendns.com @resolver1.opendns.com
dig +short -6 myip.opendns.com aaaa @resolver1.ipv6-sandbox.opendns.com

* “Formally verified,” in this sense, means that the design has been proved to have mathematically correct messages and key secrecy, forward secrecy, mutual authentication, session uniqueness, channel binding, and resistance against replay, key compromise impersonation, and denial of service attacks.




We can’t assume there are ‘threat-free’ environments. How companies can move to a Zero Trust model

Digital transformation has made the traditional perimeter-based network defense obsolete. Your employees and partners expect to be able to collaborate and access organizational resources from anywhere, on virtually any device, without impacting their productivity. Customers expect personalized experiences that demonstrate you understand them and can adapt quickly to their evolving interests. Companies need to be able to move with agility, adapting quickly to changing market conditions and taking advantage of new opportunities. Companies embracing this change are thriving, leaving those who don’t in their wake.

As organizations drive their digital transformation efforts, it quickly becomes clear that the approach to securing the enterprise needs to adapt to the new reality. The security perimeter is no longer just around the on-premises network. It now extends to SaaS applications used for business-critical workloads, hotel and coffee shop networks your employees use to access corporate resources while traveling, unmanaged devices your partners and customers use to collaborate and interact with you, and IoT devices installed throughout your corporate network and inside customer locations. The traditional perimeter-based security model is no longer enough.

The traditional firewall (VPN security model) assumed you could establish a strong perimeter, and then trust that activities within that perimeter were “safe.” The problem is today’s digital estates typically consist of services and endpoints managed by public cloud providers, devices owned by employees, partners, and customers, and web-enabled smart devices that the traditional perimeter-based model was never built to protect. We’ve learned from both our own experience, and the customers we’ve supported in their own journeys, that this model is too cumbersome, too expensive, and too vulnerable to keep going.

We can’t assume there are “threat free” environments. As we digitally transform our companies, we need to transform our security model to one which assumes breach, and as a result, explicitly verifies activities and automatically enforces security controls using all available signal and employs the principle of least privilege access. This model is commonly referred to as “Zero Trust.”

Today, we’re publishing a new white paper to help you understand the core principles of Zero Trust along with a maturity model, which breaks down requirements across the six foundational elements, to help guide your digital transformation journey.

Download the Microsoft Zero Trust Maturity Model today!

Learn more about Zero Trust and Microsoft Security.

Also, bookmark the Security blog to keep up with our expert coverage on security matters. And follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

To learn more about how you can protect your time and empower your team, check out the cybersecurity awareness page this month.


Using SSH port forwarding on Fedora

You may already be familiar with using the ssh command to access a remote system. The protocol behind ssh allows terminal input and output to flow through a secure channel. But did you know that you can also use ssh to send and receive other data securely as well? One way is to use port forwarding, which allows you to connect network ports securely while conducting your ssh session. This article shows you how it works.

About ports

A standard Linux system has a set of network ports already assigned, numbered from 0 to 65535. Your system reserves ports up to 1023 for system use. On many systems, regular users can’t bind to these low-numbered ports. Quite a few ports are commonly expected to run specific services. You can find these defined in your system’s /etc/services file.
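For example, you can look up the conventional port assignment for a service by name (https here) with a quick search of that file:

$ grep -w https /etc/services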

You can think of a network port like a physical port or jack to which you can connect a cable. That port may connect to some sort of service on the system, like wiring behind that physical jack. An example is the Apache web server (also known as httpd). The web server usually claims port 80 on the host system for HTTP non-secure connections, and 443 for HTTPS secure connections.

When you connect to a remote system, such as with a web browser, you are also “wiring” your browser to a port on your host. This is usually a random high port number, such as 54001. The port on your host connects to the port on the remote host, such as 443 to reach its secure web server.

So why use port forwarding when you have so many ports available? Here are a couple common cases in the life of a web developer.

Local port forwarding

Imagine that you are doing web development on a remote system called remote.example.com. You usually reach this system via ssh but it’s behind a firewall that allows very little additional access, and blocks most other ports. To try out your web app, it’s helpful to be able to use your web browser to point to the remote system. But you can’t reach it via the normal method of typing the URL in your browser, thanks to that pesky firewall.

Local forwarding allows you to tunnel a port available via the remote system through your ssh connection. The port appears as a local port on your system (thus “local forwarding.”)

Let’s say your web app is running on port 8000 on the remote.example.com box. To locally forward that system’s port 8000 to your system’s port 8000, use the -L option with ssh when you start your session:

$ ssh -L 8000:localhost:8000 remote.example.com

Wait, why did we use localhost as the target for forwarding? It’s because from the perspective of remote.example.com, you’re asking the host to use its own port 8000. (Recall that any host usually can refer to itself as localhost to connect to itself via a network connection.) That port now connects to your system’s port 8000. Once the ssh session is ready, keep it open, and you can type http://localhost:8000 in your browser to see your web app. The traffic between systems now travels securely over an ssh tunnel!
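If you use this forwarding regularly, you can save it in your SSH client configuration instead of typing the -L option every time. Here’s a minimal sketch for ~/.ssh/config; the webdev host alias is hypothetical:

Host webdev
    HostName remote.example.com
    LocalForward 8000 localhost:8000

With this in place, ssh webdev opens the session and sets up the tunnel automatically.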

If you have a sharp eye, you may have noticed something. What if we used a different hostname than localhost for the remote.example.com to forward? If it can reach a port on another system on its network, it usually can forward that port just as easily. For example, say you wanted to reach a MariaDB or MySQL service on the db.example.com box also on the remote network. This service typically runs on port 3306. So you could forward it with this command, even if you can’t ssh to the actual db.example.com host:

$ ssh -L 3306:db.example.com:3306 remote.example.com

Now you can run MariaDB commands against your localhost and you’re actually using the db.example.com box.
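For example, assuming a database account named appuser exists on db.example.com, you could connect through the tunnel like this. (The -h 127.0.0.1 option forces a TCP connection to the forwarded port rather than a local socket.)

$ mysql -h 127.0.0.1 -P 3306 -u appuser -p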

Remote port forwarding

Remote forwarding lets you do things the opposite way. Imagine you’re designing a web app for a friend at the office, and want to show them your work. Unfortunately, though, you’re working in a coffee shop, and because of the network setup, they can’t reach your laptop via a network connection. However, you both use the remote.example.com system at the office and you can still log in there. Your web app seems to be running well on port 5000 locally.

Remote port forwarding lets you tunnel a port from your local system through your ssh connection, and make it available on the remote system. Just use the -R option when you start your ssh session:

$ ssh -R 6000:localhost:5000 remote.example.com

Now when your friend inside the corporate firewall runs their browser, they can point it at http://remote.example.com:6000 and see your work. And as in the local port forwarding example, the communications travel securely over your ssh session.

By default the sshd daemon running on a host is set so that only that host can connect to its remote forwarded ports. Let’s say your friend wanted to be able to let people on other example.com corporate hosts see your work, and they weren’t on remote.example.com itself. You’d need the owner of the remote.example.com host to add one of these options to /etc/ssh/sshd_config on that box:

GatewayPorts yes
# or
GatewayPorts clientspecified

The first option means remote forwarded ports are available on all the network interfaces on remote.example.com. The second means that the client who sets up the tunnel gets to choose the address. This option is set to no by default.

With this option, you as the ssh client must still specify the interfaces on which the forwarded port on your side can be shared. Do this by adding a network specification before the local port. There are several ways to do this, including the following:

$ ssh -R '*:6000:localhost:5000' remote.example.com                 # all interfaces
$ ssh -R 0.0.0.0:6000:localhost:5000 remote.example.com             # all interfaces
$ ssh -R 192.168.1.15:6000:localhost:5000 remote.example.com        # single interface
$ ssh -R remote.example.com:6000:localhost:5000 remote.example.com  # single interface

Other notes

Notice that the port numbers need not be the same on local and remote systems. In fact, at times you may not even be able to use the same port. For instance, normal users may not be able to forward onto a system port (below 1024) in a default setup.

In addition, it’s possible to restrict forwarding on a host. This might be important to you if you need tighter security on a network-connected host. The PermitOpen option for the sshd daemon controls whether, and which, ports are available for TCP forwarding. The default setting is any, which allows all the examples above to work. To disallow any port forwarding, choose none, or choose only a specific host:port setting to permit. For more information, search for PermitOpen in the manual page for sshd daemon configuration:

$ man sshd_config

Finally, remember port forwarding only happens as long as the controlling ssh session is open. If you need to keep the forwarding active for a long period, try running the session in the background with the -f option, combined with -N so that no remote command is executed. Make sure your console is locked to prevent tampering while you’re away from it.
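Putting those options together, a long-lived background tunnel for the earlier web app example might look like this:

$ ssh -f -N -L 8000:localhost:8000 remote.example.com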


New Secured-core PC requirements designed to protect against targeted firmware attacks

Recent developments in security research and real-world attacks demonstrate that as more protections are proactively built into the OS and in connected services, attackers are looking for other avenues of exploitation with firmware emerging as a top target. In the last three years alone, NIST’s National Vulnerability Database has shown nearly a five-fold increase in the number of firmware vulnerabilities discovered.

To combat threats specifically targeted at the firmware and operating system levels, we’re announcing a new initiative we’ve been working on with partners to design what we call Secured-core PCs. These devices, created in partnership with our PC manufacturing and silicon partners, meet a specific set of device requirements that apply the security best practices of isolation and minimal trust to the firmware layer, or the device core, that underpins the Windows operating system. These devices are designed specifically for industries like financial services, government, and healthcare, and for workers who handle highly sensitive IP and customer or personal data, including PII, as these are higher-value targets for nation-state attackers.

In late 2018, security researchers discovered that the hacking group Strontium had been using firmware vulnerabilities to target systems in the wild with malware delivered through a firmware attack. As a result, the malicious code was hard to detect and difficult to remove: it could persist even across common cleanup procedures like an OS re-install or a hard drive replacement.

Why attackers and researchers are devoting more effort toward firmware

Firmware is used to initialize the hardware and other software on the device, and it has a higher level of access and privilege than the hypervisor and operating system kernel, making it an attractive target for attackers. Attacks targeting firmware can undermine mechanisms like secure boot and other security functionality implemented by the hypervisor or operating system, making it more difficult to identify when a system or user has been compromised. Compounding the problem is the fact that endpoint protection and detection solutions have limited visibility into the firmware layer, given that firmware runs beneath the operating system; this makes evasion easier for attackers going after firmware.

What makes a Secured-core PC?

Secured-core PCs combine identity, virtualization, operating system, hardware and firmware protection to add another layer of security underneath the operating system. Unlike software-only security solutions, Secured-core PCs are designed to prevent these kinds of attacks rather than simply detecting them. Our investments in Windows Defender System Guard and Secured-core PC devices are designed to provide the rich ecosystem of Windows 10 devices with uniform assurances around the integrity of the launched operating system and verifiable measurements of the operating system launch to help mitigate against threats taking aim at the firmware layer. These requirements enable customers to boot securely, protect the device from firmware vulnerabilities, shield the operating system from attacks, prevent unauthorized access to devices and data, and ensure that identity and domain credentials are protected.

The built-in measurements can be used by SecOps and IT admins to remotely monitor the health of their systems using System Guard runtime attestation and implement a zero-trust network rooted in hardware. This advanced firmware security works in concert with other Windows features to ensure that Secured-core PCs provide comprehensive protections against modern threats.

Removing trust from the firmware

Starting with Windows 8, we introduced Secure Boot to mitigate the risk posed by malicious bootloaders and rootkits that relied on Unified Extensible Firmware Interface (UEFI) firmware to only allow properly signed bootloaders like the Windows boot manager to execute. This was a significant step forward to protect against these specific types of attacks. However, since firmware is already trusted to verify the bootloaders, Secure Boot on its own does not protect from threats that exploit vulnerabilities in the trusted firmware. That’s why we worked with our partners to ensure these new Secured-core capabilities are shipped in devices right out of the box.

Using new hardware capabilities from AMD, Intel, and Qualcomm, Windows 10 now implements System Guard Secure Launch as a key Secured-core PC device requirement to protect the boot process from firmware attacks. System Guard uses the Dynamic Root of Trust for Measurement (DRTM) capabilities that are built into the latest silicon from AMD, Intel, and Qualcomm to enable the system to leverage firmware to start the hardware, and then shortly after re-initialize the system into a trusted state by using the OS boot loader and processor capabilities to send the system down a well-known and verifiable code path. This mechanism helps limit the trust assigned to firmware and provides powerful mitigation against cutting-edge, targeted threats against firmware.

This capability also helps to protect the integrity of the virtualization-based security (VBS) functionality implemented by the hypervisor from firmware compromise. VBS then relies on the hypervisor to isolate sensitive functionality from the rest of the OS, which helps to protect the VBS functionality from malware that may have infected the normal OS, even with elevated privileges. Protecting VBS is critical, since it is used as a building block for important OS security capabilities like Windows Defender Credential Guard, which protects against malware maliciously using OS credentials, and Hypervisor-protected Code Integrity (HVCI), which ensures that a strict code integrity policy is enforced and that all kernel code is signed and verified.

Being able to measure that the device booted securely is another critical piece of this additional layer of protection from firmware compromise, one that gives admins added confidence that their endpoints are safe. That’s why we implemented Trusted Platform Module 2.0 (TPM) as one of the device requirements for Secured-core PCs. By using the TPM to measure the components that are used during the secure launch process, we help customers enable zero trust networks using System Guard runtime attestation. Conditional access policies can be implemented based on the reports provided by the System Guard attestation client running in the isolated VBS environment.

In addition to the Secure Launch functionality, Windows implements additional safeguards that operate when the OS is running to monitor and restrict the functionality of potentially dangerous firmware functionality accessible through System Management Mode (SMM).

Beyond the hardware protection of firmware featured in Secured-core PCs, Microsoft recommends a defense-in-depth approach including security review of code, automatic updates, and attack surface reduction. Microsoft has provided an open-source firmware project called Project Mu that PC manufacturers can use as a starting point for secure firmware.

How to get a Secured-core PC

Our ecosystem partnerships have enabled us to add this additional layer of security, right out of the box, in devices designed for highly targeted industries and end users who handle mission-critical data in some of the most data-sensitive sectors, like government, financial services, and healthcare. These innovations build on the value of Windows 10 Pro, which comes with built-in protections like firewall, secure boot, and file-level information-loss protection as standard on every device.

More information on devices that are verified Secured-core PCs, including those from Dell, Dynabook, HP, Lenovo, Panasonic, and Surface, can be found on our web page.

David Weston (@dwizzzleMSFT)
Partner Director, OS Security


Top 6 email practices to protect against phishing attacks and business compromise

Most cyberattacks start over email—a user is tricked into opening a malicious attachment, or into clicking a malicious link and divulging credentials, or into responding with confidential data. Attackers dupe victims by using carefully crafted emails to build a false sense of trust and/or urgency. And they use a variety of techniques to do this—spoofing trusted domains or brands, impersonating known users, using previously compromised contacts to launch campaigns and/or using compelling but malicious content in the email. In the context of an organization or business, every user is a target and, if compromised, a conduit for a potential breach that could prove very costly.

Whether it’s sophisticated nation-state attacks, targeted phishing schemes, business email compromise, or ransomware attacks, such attacks are on the rise at an alarming rate and are also increasing in their sophistication. It is therefore imperative that every organization’s security strategy include a robust email security solution.

So, what should IT and security teams be looking for in a solution to protect all their users, from frontline workers to the C-suite? Here are 6 tips to ensure your organization has a strong email security posture:

You need a rich, adaptive protection solution.

As security solutions evolve, bad actors quickly adapt their methodologies to go undetected. Polymorphic attacks designed to evade common protection solutions are becoming increasingly common. Organizations therefore need solutions that focus on zero-day and targeted attacks in addition to known vectors. Purely standards-based checks, or checks based only on known signatures and reputation, will not cut it.

Solutions that include rich detonation capabilities for files and URLs are necessary to catch payload-based attacks. Advanced machine learning models that look at the content and headers of emails, as well as sending patterns and communication graphs, are important to thwart a wide range of attack vectors, including payload-less vectors such as business email compromise. Machine learning capabilities are greatly enhanced when the signal source feeding them is broad and rich; solutions backed by a massive security signal base should therefore be preferred. This also allows the solution to learn and adapt to changing attack strategies quickly, which is especially important in a rapidly changing threat landscape.

Complexity breeds challenges. An easy-to-configure-and-maintain system reduces the chances of a breach.

Complicated email flows can introduce moving parts that are difficult to sustain. As an example, complex mail-routing flows that enable protections for internal email can cause compliance and security challenges. Products that require unnecessary configuration bypasses to work can also cause security gaps. For example, configurations that are put in place to guarantee delivery of certain types of email (e.g., phishing simulation emails) are often poorly crafted and exploited by attackers.

Solutions that protect both external and internal email and offer value without needing complicated configurations or email flows are a great benefit to organizations. In addition, look for solutions that offer easy ways to bridge the gap between the security teams and the messaging teams. Messaging teams, motivated by the desire to guarantee mail delivery, might create overly permissive bypass rules that impact security. The sooner these issues are caught, the better for overall security. Solutions that give security teams insight when this happens can greatly reduce the time taken to rectify such flaws, thereby reducing the chances of a costly breach.

A breach isn’t an “If”, it’s a “When.” Make sure you have post-delivery detection and remediation.

No solution is 100% effective on the prevention vector because attackers are always changing their techniques. Be skeptical of any claims that suggest otherwise. Taking an ‘assume breach’ mentality will ensure that the focus is not only on prevention, but on efficient detection and response as well. When an attack does go through the defenses it is important for security teams to quickly detect the breach, comprehensively identify any potential impact and effectively remediate the threat.

Solutions that offer playbooks to automatically investigate alerts, analyze the threat, assess the impact, and take (or recommend) actions for remediation are critical for effective and efficient response. In addition, security teams need a rich investigation and hunting experience to easily search the email corpus for specific indicators of compromise or other entities. Ensure that the solution allows security teams to hunt for threats and remove them easily.

Another critical component of effective response is ensuring that security teams have a strong signal source showing what end users are seeing come through to their inboxes. Having an effortless way for end users to report issues that automatically trigger security playbooks is key.

Your users are the target. You need a continuous model for improving user awareness and readiness.

An informed and aware workforce can dramatically reduce the number of occurrences of compromise from email-based attacks. Any protection strategy is incomplete without a focus on improving the level of awareness of end users.

A core component of this strategy is raising user awareness through phishing simulations, training users on things to look out for in suspicious emails so they don’t fall prey to actual attacks. Another often overlooked but equally critical component of this strategy is ensuring that the everyday applications end users work in help raise their awareness. Capabilities that offer users relevant cues, effortless ways to verify the validity of URLs, and easy in-application reporting of suspicious emails, all without compromising productivity, are very important.

Solutions that offer phishing simulation capabilities are key. Look for deep email-client-application integrations that allow users to view the original URL behind any link, regardless of any protection being applied; this helps users make informed decisions. In addition, the ability to offer hints or tips that raise a user’s awareness of a given email or site is also important. And effortless ways to report suspicious emails that in turn trigger automated response workflows are critical as well.

Attackers meet users where they are. So must your security.

While email is the dominant attack vector, attackers and phishing attacks will go wherever users collaborate, communicate, and keep their sensitive information. As forms of sharing, collaboration, and communication other than email have become popular, attacks that target these vectors are increasing as well. For this reason, it is important to ensure that an organization’s anti-phishing strategy does not focus on email alone.

Ensure that the solution offers targeted protection capabilities for collaboration services that your organization uses. Capabilities like detonation that scan suspicious documents and links when shared are critical to protect users from targeted attacks. The ability in client applications to verify links at time-of-click offers additional protection regardless of how the content is shared with them. Look for solutions that support this capability.

Attackers don’t think in silos. Neither can the defenses.

Attackers target the weakest link in an organization’s defenses. They look for an initial compromise to get in, and once inside will look for a variety of ways to increase the scope and impact of the breach. They typically achieve this by trying to compromise other users, moving laterally within the organization, elevating privileges when possible, and finally reaching a system or data repository of critical value. As they proliferate through the organization, they will touch different endpoints, identities, mailboxes, and services.

Reducing the impact of such attacks requires quick detection and response, and that can only be achieved when the defenses across these systems do not act in silos. This is why it is critical to have an integrated view across security solutions. Look for an email security solution that integrates well with other security solutions such as endpoint protection, CASB, identity protection, etc. Look for richness in integration that goes beyond signal sharing and extends to detection and response flows.


Use sshuttle to build a poor man’s VPN

Nowadays, business networks often use a VPN (virtual private network) for secure communications with workers. However, the protocols used can sometimes make performance slow. If you can reach a host on the remote network with SSH, you could set up port forwarding. But this can be painful, especially if you need to work with many hosts on that network. Enter sshuttle, which lets you set up a quick and dirty VPN with just SSH access. Read on for more information on how to use it.

The sshuttle application was designed for exactly the kind of scenario described above. The only requirement on the remote side is that the host must have Python available. This is because sshuttle constructs and runs some Python source code to help transmit data.

Installing sshuttle

The sshuttle application is packaged in the official repositories, so it’s easy to install. Open a terminal and use the following command with sudo:

$ sudo dnf install sshuttle

Once installed, you may find the manual page interesting:

$ man sshuttle

Setting up the VPN

The simplest case is just to forward all traffic to the remote network. This isn’t necessarily a crazy idea, especially if you’re not on a trusted local network like your own home. Use the -r switch with the SSH username and the remote host name:

$ sshuttle -r username@remotehost 0.0.0.0/0

However, you may want to restrict the VPN to specific subnets rather than all network traffic. (A complete discussion of subnets is outside the scope of this article, but you can read more here on Wikipedia.) Let’s say your office internally uses the reserved Class A subnet 10.0.0.0 and the reserved Class B subnet 172.16.0.0. The command above becomes:

$ sshuttle -r username@remotehost 10.0.0.0/8 172.16.0.0/16

This works great for working with hosts on the remote network by IP address. But what if your office is a large network with lots of hosts? Names are probably much more convenient, and maybe even required. Never fear: sshuttle can also forward DNS queries to the office with the --dns switch:

$ sshuttle --dns -r username@remotehost 10.0.0.0/8 172.16.0.0/16

To run sshuttle like a daemon, add the -D switch. This will also send log information to the systemd journal via its syslog compatibility.
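For instance, the full invocation from above, daemonized, might look like this:

$ sshuttle -D --dns -r username@remotehost 10.0.0.0/8 172.16.0.0/16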

Depending on the capabilities of your system and the remote system, you can use sshuttle for an IPv6 based VPN. You can also set up configuration files and integrate it with your system startup if desired. If you want to read even more about sshuttle and how it works, check out the official documentation. For a look at the code, head over to the GitHub page.

