
Command line quick tips: More about permissions

A previous article covered some basics about file permissions on your Fedora system. This installment shows you additional ways to use permissions to manage file access and sharing. It also builds on the knowledge and examples in the previous article, so if you haven’t read that one, do check it out.

Symbolic and octal

In the previous article you saw how there are three distinct permission sets for a file. The user that owns the file has a set, members of the group that owns the file have a set, and a final set covers everyone else. These permissions are expressed on screen in a long listing (ls -l) using symbolic mode.

Each set has r, w, and x entries for whether a particular user (owner, group member, or other) can read, write, or execute that file. But there’s another way to express these permissions: in octal mode.

You’re used to the decimal numbering system, which has ten distinct values (0 through 9). The octal system, on the other hand, has eight distinct values (0 through 7). In the case of permissions, octal is used as a shorthand to show the value of the r, w, and x fields. Think of each field as having a value:

  • r = 4
  • w = 2
  • x = 1

Now you can express any combination with a single octal value. For instance, read and write permission, but no execute permission, would have a value of 6. Read and execute permission only would have a value of 5. A file’s rwxr-xr-x symbolic permission has an octal value of 755.
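
To see where that 755 comes from, add up each set:

rwx = 4+2+1 = 7
r-x = 4+0+1 = 5
r-x = 4+0+1 = 5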

You can use octal values to set file permissions with the chmod command similarly to symbolic values. The following two commands set the same permissions on a file:

chmod u=rw,g=r,o=r myfile1
chmod 644 myfile1

Special permission bits

There are several special permission bits also available on a file. These are called setuid (or suid), setgid (or sgid), and the sticky bit (or delete inhibit). Think of this as yet another set of octal values:

  • setuid = 4
  • setgid = 2
  • sticky = 1

The setuid bit is ignored unless the file is executable. If that’s the case, the file (presumably an app or a script) runs as if it were launched by the user who owns the file. A good example of setuid is the /bin/passwd utility, which allows a user to set or change passwords. This utility must be able to write to files no user should be allowed to change. Therefore it is carefully written, owned by the root user, and has a setuid bit so it can alter the password related files.
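
You can see the setuid bit in a long listing of the passwd utility; note the s where the owner’s x would normally appear (the exact size and date will vary on your system):

$ ls -l /bin/passwd
-rwsr-xr-x. 1 root root 33424 Jun 14 14:03 /bin/passwd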

The setgid bit works similarly for executable files. The file will run with the permissions of the group that owns it. However, setgid also has an additional use for directories. If a file is created in a directory with setgid permission, the group owner for the file will be set to the group owner of the directory.
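
Here is a quick sketch of that directory behavior, assuming a group named finance exists and your user is a member of it:

$ mkdir shared
$ chgrp finance shared
$ chmod g+s shared
$ touch shared/report.txt
$ ls -l shared/report.txt    # group owner is finance, not your private group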

Finally, the sticky bit, while ignored for files, is useful for directories. The sticky bit set on a directory will prevent a user from deleting files in that directory owned by other users.

The way to set these bits with chmod in octal mode is to add a value prefix, such as 4755 to add setuid to an executable file. In symbolic mode, the u and g can be used to set or remove setuid and setgid, such as u+s,g+s. The sticky bit is set using o+t. (Other combinations, like o+s or u+t, are meaningless and ignored.)
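
For example, both of these commands mark a hypothetical executable named myscript as setuid; the octal form sets the complete mode, while the symbolic form only adds the bit:

$ chmod 4755 myscript
$ chmod u+s myscript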

Sharing and special permissions

Recall the example from the previous article concerning a finance team that needs to share files. As you can imagine, the special permission bits help to solve their problem even more effectively. The original solution simply made a directory the whole group could write to:

drwxrwx---. 2 root finance 4096 Jul 6 15:35 finance

One problem with this directory is that users dwayne and jill, who are both members of the finance group, can delete each other’s files. That’s not optimal for a shared space. It might be useful in some situations, but probably not when dealing with financial records!

Another problem is that files in this directory may not be truly shared, because they will be owned by the default groups of dwayne and jill — most likely the user private groups also named dwayne and jill.

A better way to solve this is to set both setgid and the sticky bit on the folder. This will do two things — cause files created in the folder to be owned by the finance group automatically, and prevent dwayne and jill from deleting each other’s files. Either of these commands will work:

sudo chmod 3770 finance
sudo chmod u+rwx,g+rwxs,o+t finance

The long listing for the directory now shows the new special permissions applied. The sticky bit appears as T and not t because the folder is not searchable for users outside the finance group.

drwxrws--T. 2 root finance 4096 Jul 6 15:35 finance


New machine learning model sifts through the good to unearth the bad in evasive malware

We continuously harden machine learning protections against evasion and adversarial attacks. One of the latest innovations in our protection technology is the addition of a class of hardened malware detection machine learning models called monotonic models to Microsoft Defender ATP’s Antivirus.

Historically, detection evasion has followed a common pattern: attackers would build new versions of their malware and test them offline against antivirus solutions. They’d keep making adjustments until the malware could evade antivirus products. Attackers would then carry out their campaign knowing that the malware wouldn’t initially be blocked by AV solutions, which were then forced to catch up by adding detections for the malware. In the cybercriminal underground, antivirus evasion services are available to make this process easier for attackers.

Microsoft Defender ATP’s Antivirus has significantly advanced in becoming resistant to attacker tactics like this. A sizeable portion of the protection we deliver is powered by machine learning models hosted in the cloud. The cloud protection service breaks attackers’ ability to test and adapt to our defenses in an offline environment, because attackers must either forgo testing, or test against our defenses in the cloud, where we can observe them and react even before they begin.

Hardening our defenses against adversarial attacks doesn’t end there. In this blog we’ll discuss a new class of cloud-based ML models that further harden our protections against detection evasion.

Most machine learning models are trained on a mix of malicious and clean features. Attackers routinely try to throw these models off balance by stuffing clean features into malware.

Monotonic models are resistant against adversarial attacks because they are trained differently: they only look for malicious features. The magic is this: Attackers can’t evade a monotonic model by adding clean features. To evade a monotonic model, an attacker would have to remove malicious features.

Monotonic models explained

Last summer, researchers from UC Berkeley (Incer, Inigo, et al, “Adversarially robust malware detection using monotonic classification”, Proceedings of the Fourth ACM International Workshop on Security and Privacy Analytics, ACM, 2018) proposed applying a technique of adding monotonic constraints to malware detection machine learning models to make models robust against adversaries. Simply put, the said technique only allows the machine learning model to leverage malicious features when considering a file – it’s not allowed to use any clean features.

Figure 1. Features used by a baseline versus a monotonic constrained logistic regression classifier. The monotonic classifier does not use cleanly-weighted features so that it’s more robust to adversaries.

Inspired by the academic research, we deployed our first monotonic logistic regression models to Microsoft Defender ATP cloud protection service in late 2018. Since then, they’ve played an important part in protecting against attacks.

Figure 2 below illustrates the production performance of the monotonic classifiers versus the baseline unconstrained model. As expected, monotonic constrained models detect less malware overall than classic models. However, they can detect malware attacks that would otherwise have been missed because of clean features.

Figure 2. Malware detection machine learning classifiers comparing the unconstrained baseline classifier versus the monotonic constrained classifier in customer protection.

The monotonic classifiers don’t replace baseline classifiers; they run in addition to the baseline and add further protection. We combine all our classifiers using stacked classifier ensembles; monotonic classifiers add significant value because of the unique classification they provide.

How Microsoft Defender ATP uses monotonic models to stop adversarial attacks

One common way for attackers to add clean features to malware is to digitally code-sign malware with trusted certificates. Malware families like ShadowHammer, Kovter, and Balamid are known to abuse certificates to evade detection. In many of these cases, the attackers impersonate legitimate registered businesses to defraud certificate authorities into issuing them trusted code-signing certificates.

LockerGoga, a strain of ransomware that’s known for being used in targeted attacks, is another example of malware that uses digital certificates. LockerGoga emerged in early 2019 and has been used by attackers in high-profile campaigns that targeted organizations in the industrial sector. Once attackers are able to breach a target network, they use LockerGoga to encrypt enterprise data en masse and demand ransom.

Figure 3. LockerGoga variant digitally code-signed with a trusted CA

When Microsoft Defender ATP encounters a new threat like LockerGoga, the client sends a featurized description of the file to the cloud protection service for real-time classification. An array of machine learning classifiers processes the features describing the content, including whether attackers had digitally code-signed the malware with a trusted code-signing certificate that chains to a trusted CA. By ignoring certificates and other clean features, monotonic models in Microsoft Defender ATP can correctly identify attacks that otherwise would have slipped through defenses.

Very recently, researchers demonstrated an adversarial attack that appends a large volume of clean strings from a computer game executable to several well-known malware and credential dumping tools – essentially adding clean features to the malicious files – to evade detection. The researchers showed how this technique can successfully impact machine learning prediction scores so that the malware files are not classified as malware. The monotonic model hardening that we’ve deployed in Microsoft Defender ATP is key to preventing this type of attack, because, for a monotonic classifier, adding features to a file can only increase the malicious score.

Given how they significantly harden defenses, monotonic models are now standard components of machine learning protections in Microsoft Defender ATP’s Antivirus. One of our monotonic models uniquely blocks malware on an average of 200,000 distinct devices every month. We now have three different monotonic classifiers deployed, protecting against different attack scenarios.

Monotonic models are just the latest enhancements to Microsoft Defender ATP’s Antivirus. We continue to evolve machine learning-based protections to be more resilient to adversarial attacks. More effective protection against malware and other threats on endpoints increases defenses across the entire Microsoft Threat Protection solution. By unifying and enabling signal-sharing across Microsoft’s security services, Microsoft Threat Protection secures identities, endpoints, email and data, apps, and infrastructure.

Geoff McDonald (@glmcdona), Microsoft Defender ATP Research team
with Taylor Spangler, Windows Data Science team


Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft Defender ATP community.

Follow us on Twitter @MsftSecIntel.


Manage your passwords with Bitwarden and Podman

You might have encountered a few advertisements the past year trying to sell you a password manager. Some examples are LastPass, 1Password, and Dashlane. A password manager removes the burden of remembering the passwords for all your websites. No longer do you need to re-use passwords or use easy-to-remember passwords. Instead, you only need to remember one single password that can unlock all your other passwords for you.

This can make you more secure by having one strong password instead of many weak passwords. You can also sync your passwords across devices if you have a cloud-based password manager like LastPass, 1Password, or Dashlane. Unfortunately, none of these products are open source. Luckily there are open source alternatives available.

Open source password managers

These alternatives include Bitwarden, LessPass, or KeePass. Bitwarden is an open source password manager that stores all your passwords encrypted on the server, which works the same way as LastPass, 1Password, or Dashlane. LessPass is a bit different as it focuses on being a stateless password manager. This means it derives passwords based on a master password, the website, and your username rather than storing the passwords encrypted. On the other side of the spectrum there’s KeePass, a file-based password manager with a lot of flexibility with its plugins and applications.

Each of these three apps has its own downsides. Bitwarden stores everything in one place and is exposed to the web through its API and website interface. LessPass can’t store custom passwords since it’s stateless, so you need to use their derived passwords. KeePass, a file-based password manager, can’t easily sync between devices. You can utilize a cloud-storage provider together with WebDAV to get around this, but a lot of clients do not support it and you might get file conflicts if devices do not sync correctly.

This article focuses on Bitwarden.

Running an unofficial Bitwarden implementation

There is a community implementation of the server and its API called bitwarden_rs. This implementation is fully open source as it can use SQLite or MariaDB/MySQL, instead of the proprietary Microsoft SQL Server that the official server uses.

It’s important to recognize some differences exist between the official and the unofficial version. For instance, the official server has been audited by a third-party, whereas the unofficial one hasn’t. When it comes to implementations, the unofficial version lacks email confirmation and support for two-factor authentication using Duo or email codes.

Let’s get started running the server with SELinux in mind. Following the documentation for bitwarden_rs you can construct a Podman command as follows:

$ podman run -d \
  --userns=keep-id \
  --name bitwarden \
  -e SIGNUPS_ALLOWED=false \
  -e ROCKET_PORT=8080 \
  -v /home/egustavs/Bitwarden/bw-data/:/data/:Z \
  -p 8080:8080 \
  bitwardenrs/server:latest

This downloads the bitwarden_rs image and runs it in a user container under the user’s namespace. It uses a port above 1024 so that non-root users can bind to it. It also changes the volume’s SELinux context with :Z to prevent permission issues with read-write on /data.

If you host this under a domain, it’s recommended to put this server behind a reverse proxy with Apache or Nginx. That way you can use ports 80 and 443, which point to the container’s port 8080, without running the container as root.
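
Here is a minimal sketch of such a proxy, assuming Nginx and a hypothetical domain bitwarden.example.com (TLS is omitted here; see the LetsEncrypt section below):

server {
    listen 80;
    server_name bitwarden.example.com;

    location / {
        # Forward requests to the rootless container published on port 8080
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}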

Running under systemd

With Bitwarden now running, you probably want to keep it that way. Next, create a unit file that keeps the container running, automatically restarts if it doesn’t respond, and starts running after a system restart. Create this file as /etc/systemd/system/bitwarden.service:

[Unit]
Description=Bitwarden Podman container
Wants=syslog.service

[Service]
User=egustavs
Group=egustavs
TimeoutStartSec=0
# Start the existing container and attach (-a) so systemd tracks the process
ExecStart=/usr/bin/podman start -a 'bitwarden'
ExecStop=-/usr/bin/podman stop -t 10 'bitwarden'
Restart=always
RestartSec=30s
KillMode=none

[Install]
WantedBy=multi-user.target

Now, enable and start it using sudo:

$ sudo systemctl enable bitwarden.service && sudo systemctl start bitwarden.service
$ systemctl status bitwarden.service
bitwarden.service - Bitwarden Podman container
Loaded: loaded (/etc/systemd/system/bitwarden.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-07-09 20:23:16 UTC; 1 day 14h ago
Main PID: 14861 (podman)
Tasks: 44 (limit: 4696)
Memory: 463.4M

Success! Bitwarden is now running under systemd and will keep running.

Adding LetsEncrypt

It’s strongly recommended to run your Bitwarden instance through an encrypted channel with something like LetsEncrypt if you have a domain. Certbot is a tool that creates LetsEncrypt certificates for us, and the Certbot project has a guide for doing this on Fedora.
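
As a rough sketch, assuming the certbot package and a hypothetical domain bitwarden.example.com (the standalone authenticator needs port 80 free while it runs):

$ sudo dnf install certbot
$ sudo certbot certonly --standalone -d bitwarden.example.com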

After you generate a certificate, you can follow the bitwarden_rs guide about HTTPS. Just remember to append :Z to the LetsEncrypt volume to handle permissions while not changing the port.


Photo by CMDR Shane on Unsplash.


Microsoft’s Threat & Vulnerability Management solution now generally available

I’m excited to announce that Microsoft’s Threat & Vulnerability Management solution is generally available as of June 30! We have been working closely with customers for more than a year to incorporate their real needs and feedback to better address vulnerability management. Our goal is to empower defenders with the tools they need to better protect against evolving threats, and we believe this solution will help provide that additional visibility and agility they need.

Threat & Vulnerability Management (TVM) is a built-in capability in Microsoft Defender Advanced Threat Protection (ATP) that uses a risk-based approach to discover, prioritize, and remediate endpoint vulnerabilities and misconfigurations. With Microsoft Defender ATP’s Threat & Vulnerability Management, customers benefit from:

  • Continuous discovery of vulnerabilities and misconfigurations
  • Prioritization based on business context and dynamic threat landscape
  • Correlation of vulnerabilities with endpoint detection and response (EDR) alerts to expose breach insights
  • Machine-level vulnerability context during incident investigations
  • Built-in remediation processes through unique integration with Microsoft Intune and Microsoft System Center Configuration Manager

Traditional vulnerability scanning only happens periodically, leaving organizations with security blind spots between scans. The one-size-fits-all approach that these traditional solutions use ignores critical business-specific context, as well as the dynamic threat landscape. This is coupled with the fact that mitigation of vulnerabilities is a manual process, often across teams, that can take days, weeks, or months to complete. This leaves a window of opportunity for attackers and puts our defenders in a tough spot.

To address these challenges, Microsoft partnered with a dozen enterprise customers on the design and creation of this new Threat & Vulnerability Management solution. One of them is Telit, a global leader in IoT enablement offering end-to-end IoT solutions, including enterprise-grade hardware, connectivity, platform, and consulting services. Telit already had a well-defined vulnerability management program in place, but said they were missing several critical capabilities, including visibility, prioritization, and remediation.

Our design partners play a key role throughout the entire process, from planning and building to operationalizing and maturing the product so we can deliver the best experience. Many of our customers have existing vulnerability management programs, so we knew that to have them switch to Microsoft we would need a disruptive approach to vulnerability management. From private preview to general availability and beyond, our key goals were to bridge the gap between Security and IT roles in threat protection, to reduce time to threat resolution while enabling real-time prioritization and risk reduction based on the evolving threat landscape and business context. The team continues to incorporate feedback from customers and partners, adding these new capabilities on a monthly basis.

“Telit’s previous threat and vulnerability solutions were limited to on-premises connected endpoints. Moving to Microsoft’s TVM cloud-based solution provides us much better visibility into roaming endpoints with a continuous assessment, especially when our endpoints are connected to untrusted networks.”
— Itzik Menashe, VP of IT & Information Security, Telit

Working together with Telit, we quickly understood that the current prioritization norm is not enough to properly reduce risk in an organization. We consulted with our partners on a new risk-based approach, which focuses on continuous discovery of vulnerabilities and misconfigurations and correlates those insights with context specific to their business and the dynamic threat landscape.

Microsoft’s built-in, end-to-end remediation process helps Telit bridge the gap between their security and operations teams. The unique integration with Microsoft Intune allows their security team to create remediation requests with a click of a button, and the operations team receives the requests automatically with all relevant information and can start the remediation process right away. The security team can then watch their exposure score drop in real time as remediation progresses.

“Microsoft’s TVM provides Telit with an easy-to-use solution that incorporates strong discovery capabilities, a risk-based approach to prioritization, and an effective remediation process. With this solution we are able to cover a large number of endpoints using a very small team of security engineers.”
— Mor Asher, Global IT and Information Security Manager, Telit

The product experience and ease of implementation was a big driver for Telit and thousands of other active customers to start using Microsoft Defender ATP Threat & Vulnerability Management. Telit had Microsoft Defender ATP’s TVM up and running within seconds.

To learn more about threat and vulnerability management, watch our video that walks you through the experience.

If you already have Microsoft Defender ATP, the TVM solution is now available within your ATP portal. If you would like to sign up for a trial of Microsoft Defender ATP including TVM, sign up here.

We’re excited for our customers to evaluate this new solution and are looking forward to continued feedback.


New identity threat investigation experience for security analysts announced

As the modern workplace transforms, the identity attack surface area is growing exponentially, across on-premises and cloud, spanning a multitude of endpoints and applications. Security Operations (SecOps) teams are challenged to monitor user activities, suspicious or otherwise, across all dimensions of the identity attack surface, using multiple security solutions that often are not connected. Because identity protection is paramount for the modern workplace, investigating identity threats requires a single experience to monitor all user activities and hunt for suspicious behaviors in order to triage users quickly.

Today, Microsoft is announcing the new identity threat investigation experience, which correlates identity alerts and activities from Azure Advanced Threat Protection (Azure ATP), Azure Active Directory (Azure AD) Identity Protection, and Microsoft Cloud App Security into a single investigation experience for security analysts and hunters alike.

Modern identity attacks leverage hybrid cloud environments as a single attack surface

The identity threat investigation experience combines user identity signals from your on-premises and cloud services to close the gap between disparate signals in your environment and leverages state-of-the-art User and Entity Behavior Analytics (UEBA) capabilities to provide a risk score and rich contextual information for each user. It empowers security analysts to prioritize their investigations and reduce investigation times, ending the need to toggle between identity security solutions. This gives your SecOps teams more time and the right information to make better decisions and actively remediate identity threats and risks.

Azure ATP provides on-premises detections and activities with abnormal behavior analytics to assist in investigating the most at-risk users. Microsoft Cloud App Security detects and alerts security analysts to the potential of sensitive data exfiltration for first- and third-party cloud apps. And Azure AD Identity Protection detects unusual sign-in information, implementing conditional access on the compromised user until the issue is resolved. Combined, these services analyze the activities and alerts, using UEBA, to determine risky behaviors and provide you with an investigation priority score to streamline incident response for compromised identities.

To further simplify your SecOps workflows, we embedded the new experience into the Cloud App Security portal, regardless of whether you’re using Microsoft Cloud App Security today. While it enriches each alert with additional information, it also allows you to easily pivot from the correlated alert timeline directly into a deeper dive investigation and hunting experience.

User investigation priority

We’re adding a new dimension to the current investigation model, which is based on the total number of alerts: a new user investigation priority, determined by all user activities and alerts that could indicate an active advanced attack or insider threat.

To calculate the user investigation priority, each abnormal event is scored based on the user’s profile history, their peers, and the organization. Additionally, the potential business and asset impact of any given user is analyzed to determine the investigation priority score.

The new concept is included on the updated user page, which provides relevant information about who the user is, the investigation priority score, how it compares across all users within the organization, and abnormal alerts and activities of the user.

In the image below, the user’s investigation priority score of 155 puts them in the top percentile within the organization, making them a top user for a security analyst to investigate.

Identity threat investigation user page.

The score is surfaced on the main dashboard to help you get an immediate idea of which users currently represent the highest risk within your organization and should be prioritized for further investigation.

Top users by investigation priority on the main dashboard.

Improved investigation and hunting experience

Beyond signal correlation and a redesigned user page, the new identity threat investigation experience also adds new and advanced investigation capabilities specifically for Azure ATP customers, regardless of whether you choose to use Azure AD Identity Protection and/or Microsoft Cloud App Security.

These capabilities include the:

  • Ability for security analysts to perform threat hunting with greater context over both cloud and on-premises resources by leveraging advanced filtering capabilities and enriched alert information.
  • Visibility and management of Azure AD user risk levels with the ability to confirm compromised user status, which changes the Azure AD user risk level to High.
  • Creation of activity policies to determine governance actions and leverage built-in automation capabilities via the native integration with Microsoft Flow to more easily triage alerts.

New threat hunting experience to analyze alerts and activities.

Get started with the public preview today

If you’re one of the many enterprise customers already using Azure ATP, Microsoft Cloud App Security, and/or Azure AD Identity Protection and want to test the new identity threat investigation experience, get started by checking out our comprehensive technical documentation.

If you’re just starting your journey, begin a trial of Microsoft Threat Protection to experience the benefits of the most comprehensive, integrated, and secure threat protection solution for the modern workplace.

We would love your feedback! Find us on the Azure ATP Tech Community and send us your questions or feedback on the new experience.


Securing telnet connections with stunnel

Telnet is a client-server protocol that connects to a remote server through TCP over port 23. Telnet does not encrypt data, so it is considered insecure: passwords can easily be sniffed because data is sent in the clear. However, there are still legacy systems that need to use it. This is where stunnel comes to the rescue.

Stunnel is designed to add SSL encryption to programs that have insecure connection protocols. This article shows you how to use it, with telnet as an example.

Server Installation

Install stunnel along with the telnet server and client using sudo:

sudo dnf -y install stunnel telnet-server telnet

Add a firewall rule, entering your password when prompted:

sudo firewall-cmd --add-service=telnet --perm
sudo firewall-cmd --reload

Next, generate an RSA private key and an SSL certificate:

openssl genrsa 2048 > stunnel.key
openssl req -new -key stunnel.key -x509 -days 90 -out stunnel.crt

You will be prompted for the following information one line at a time. When asked for Common Name you must enter the correct host name or IP address, but everything else you can skip through by hitting the Enter key.

You are about to be asked to enter information that will be
incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:
Email Address []:

Merge the RSA key and SSL certificate into a single .pem file, and copy that to the SSL certificate directory:

cat stunnel.crt stunnel.key > stunnel.pem
sudo cp stunnel.pem /etc/pki/tls/certs/

Now it’s time to define the service and the ports to use for encrypting your connection. Choose a port that is not already in use. This example uses port 450 for tunneling telnet. Edit or create the /etc/stunnel/telnet.conf file:

cert = /etc/pki/tls/certs/stunnel.pem
sslVersion = TLSv1
chroot = /var/run/stunnel
setuid = nobody
setgid = nobody
pid = /stunnel.pid
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
[telnet]
accept = 450
connect = 23

The accept option is the port the server will listen to for incoming telnet requests. The connect option is the internal port the telnet server listens to.

Next, make a copy of the systemd unit file that allows you to override the packaged version:

sudo cp /usr/lib/systemd/system/stunnel.service /etc/systemd/system

Edit the /etc/systemd/system/stunnel.service file to add two lines. These lines create a chroot jail for the service when it starts.

[Unit]
Description=TLS tunnel for network daemons
After=syslog.target network.target

[Service]
ExecStart=/usr/bin/stunnel
Type=forking
PrivateTmp=true
ExecStartPre=-/usr/bin/mkdir /var/run/stunnel
ExecStartPre=/usr/bin/chown -R nobody:nobody /var/run/stunnel

[Install]
WantedBy=multi-user.target

Next, configure SELinux to allow telnet on the new port you just specified:

sudo semanage port -a -t telnetd_port_t -p tcp 450

Finally, add a new firewall rule:

sudo firewall-cmd --add-port=450/tcp --perm
sudo firewall-cmd --reload

Now you can enable and start telnet and stunnel.

sudo systemctl enable telnet.socket stunnel@telnet.service --now

A note on the systemctl command is in order. Systemd and the stunnel package provide an additional template unit file by default. The template lets you drop multiple configuration files for stunnel into /etc/stunnel, and use the filename to start the service. For instance, if you had a foobar.conf file, you could start that instance of stunnel with systemctl start stunnel@foobar.service, without having to write any unit files yourself.

If you want, you can set this stunnel template service to start on boot:

sudo systemctl enable stunnel@telnet.service
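
Before moving on to the client, you can optionally verify that stunnel is answering TLS on the new port. Assuming the server’s IP address is 192.168.1.143, as in the client example below, the output should include the certificate you generated earlier:

openssl s_client -connect 192.168.1.143:450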

Client Installation

This part of the article assumes you are logged in as a normal user (with sudo privileges) on the client system. Install stunnel and the telnet client:

sudo dnf -y install stunnel telnet

Copy the stunnel.pem file from the remote server to your client /etc/pki/tls/certs directory. In this example, the IP address of the remote telnet server is 192.168.1.143.

sudo scp myuser@192.168.1.143:/etc/pki/tls/certs/stunnel.pem /etc/pki/tls/certs/

Create the /etc/stunnel/telnet.conf file:

cert = /etc/pki/tls/certs/stunnel.pem
client=yes
[telnet]
accept=450
connect=192.168.1.143:450

The accept option is the port that will be used for telnet sessions. The connect option is the IP address of your remote server and the port it’s listening on.

Next, enable and start stunnel:

sudo systemctl enable stunnel@telnet.service --now

Test your connection. Since you have a connection established, you will telnet to localhost instead of the hostname or IP address of the remote telnet server:

[user@client ~]$ telnet localhost 450
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

Kernel 5.0.9-301.fc30.x86_64 on an x86_64 (0)
server login: myuser
Password: XXXXXXX
Last login: Sun May  5 14:28:22 from localhost
[myuser@server ~]$

3 investments Microsoft is making to improve identity management

As a large enterprise with global reach, Microsoft has the same security risks as its customers. We have a distributed, mobile workforce who access corporate resources from external networks. Many individuals struggle to remember complex passwords or reuse one password across many accounts, which makes them vulnerable to attackers. As Microsoft has embraced digital transformation for our own business, we shifted to a security strategy that places strong employee identities at the center. Many of our customers are on a similar journey and may find value in our current identity management approach.

Our goal is to reduce the risk of compromised identity and empower people to be efficient and agile whether they’re on our network or not.

Our identity management solutions focus on three key areas:

  • Securing administrator accounts
  • Eliminating passwords
  • Simplifying identity provisioning

Read on for more details for each of these investment areas, advice on scaling your investment to meet your budget, and a wrap-up of some key insights that can help you smoothly implement new policies.

Securing administrator accounts

Our administrators have access to Microsoft’s most sensitive data and systems, which makes them a target of attackers. To improve protection of our organization, it’s important to limit the number of people who have privileged access and implement elevated controls for when, how, and where administrator accounts can be used. This helps reduce the odds that a malicious actor will gain access.

There are three practices that we advise:

  • Secure devices—Establish a separate device for administrative tasks that is updated and patched with the most recent software and operating system. Set the security controls at high levels and prevent administrative tasks from being executed remotely.
  • Isolated identity—Issue an administrator identity from a separate namespace or forest that cannot access the internet and is different from the user’s information worker identity. Our administrators are required to use a smartcard to access this account.
  • Non-persistent access—Provide zero rights by default to administration accounts. Require that they request just-in-time (JIT) privileges that give them access for a finite amount of time and log the activity in a system.

Budget allocations may limit the amount that you can invest in these three areas; however, we still recommend that you do all three at the level that makes sense for your organization. Calibrate the level of security controls on the secure device to meet your risk profile.

Eliminating passwords

The security community has recognized for several years that passwords are not safe. Users struggle to create and remember dozens of complex passwords, and attackers excel at acquiring passwords through methods like password spray attacks and phishing. When Microsoft first explored the use of Multi-Factor Authentication (MFA) for our workforce, we issued smartcards to each employee. This was a very secure authentication method; however, it was cumbersome for employees. They found workarounds, such as forwarding work email to a personal account, that made us less safe.

Eventually we realized that eliminating passwords was a much better solution. This drove home an important lesson: as you institute policies to improve security, always remember that a great user experience is critical for adoption.

Here are steps you can take to prepare for a password-less world:

  • Enforce MFA—Conform to the fast identity online (FIDO) 2.0 standard, so you can require a PIN and a biometric for authentication rather than a password. Windows Hello is one good example, but choose the MFA method that works for your organization.
  • Reduce legacy authentication workflows—Place apps that require passwords into a separate user access portal and migrate users to modern authentication flows most of the time. At Microsoft only 10 percent of our users enter a password on a given day.
  • Remove passwords—Create consistency across Active Directory and Azure Active Directory (Azure AD) to enable administrators to remove passwords from the identity directory.

Simplifying identity provisioning

We believe the most underrated identity management step you can take is to simplify identity provisioning. Set up your identities with access to exactly the right systems and tools. If you provide too much access, you put the organization at risk if the identity becomes compromised. However, under-provisioning may encourage people to request access for more than they need in order to avoid requesting permission again.

We take these two approaches:

  • Set up role-based access—Identify the systems, tools, and resources that each role needs to do their job. Establish access rules that make it easy to give a new user the right permissions when you set up their account or they change roles.
  • Establish an identity governance process—Make sure that as people move roles they don’t carry forward access they no longer need.

Establishing the right access for each role is so important that if you are only able to follow one of our recommendations, focus on identity provisioning and lifecycle management.

What we learned

As you take steps to improve your identity management, keep in mind the following lessons Microsoft has learned along the way:

  • Enterprise-level cultural shifts—Getting the technology and hardware resources for a more secure enterprise can be difficult. Getting people to modify their behavior is even harder. To successfully roll out a new initiative, plan for enterprise-level cultural shifts.
  • Beyond the device—Strong identity management works hand-in-hand with healthy devices.
  • Security starts at provisioning—Don’t put governance off until later. Identity governance is crucial to ensure that companies of all sizes can audit the access privileges of all accounts. Invest early in capabilities that give the right people access to the right things at the right time.
  • User experience—We found that if you combine user experience factors with security best practices, you get the best outcome.

Learn more

For more details on how identity management fits within the overall Microsoft security framework and our roadmap forward, watch the Speaking of security: Identity management webinar.


Use udica to build SELinux policy for containers

While modern IT environments move towards Linux containers, the need to secure these environments is as relevant as ever. Containers are a process isolation technology. While containers can be a defense mechanism, they only excel when combined with SELinux.

Fedora SELinux engineering built a new standalone tool, udica, to generate SELinux policy profiles for containers by automatically inspecting them. This article focuses on why udica is needed in the container world, and how it makes SELinux and containers work better together. You’ll find examples of SELinux separation for containers that let you avoid turning protection off because the generic SELinux type container_t is too tight. With udica you can easily customize the policy with limited SELinux policy writing skills.

SELinux technology

SELinux is a security technology that brings proactive security to Linux systems. It’s a labeling system that assigns a label to all subjects (processes and users) and objects (files, directories, sockets, etc.). These labels are then used in a security policy that controls access throughout the system. It’s important to mention that what’s not allowed in an SELinux security policy is denied by default. The policy rules are enforced by the kernel. This security technology has been in use on Fedora for several years. A real example of such a rule is:

allow httpd_t httpd_log_t: file { append create getattr ioctl lock open read setattr };

The rule allows any process labeled as httpd_t to create, append, read and lock files labeled as httpd_log_t. Using the ps command, you can list all processes with their labels:

$ ps -efZ | grep httpd
system_u:system_r:httpd_t:s0 root 13911 1 0 Apr14 ? 00:05:14 /usr/sbin/httpd -DFOREGROUND
...

To see which objects are labeled as httpd_log_t, use semanage:

# semanage fcontext -l | grep httpd_log_t
/var/log/httpd(/.*)? all files system_u:object_r:httpd_log_t:s0
/var/log/nginx(/.*)? all files system_u:object_r:httpd_log_t:s0
...

The SELinux security policy for Fedora is shipped in the selinux-policy RPM package.

SELinux vs. containers

In Fedora, the container-selinux RPM package provides a generic SELinux policy for all containers started by engines like podman or docker. Its main purposes are to protect the host system against a container process, and to separate containers from each other. For instance, containers confined by SELinux with the process type container_t can only read/execute files in /usr and write to files labeled with the container_file_t type on the host file system. To prevent attacks by containers on each other, Multi-Category Security (MCS) is used.

Using only one generic policy for containers is problematic, because of the huge variety of container usage. On one hand, the default container type (container_t) is often too strict. For example:

  • Fedora Silverblue needs containers to read/write a user’s home directory
  • Fluentd project needs containers to be able to read logs in the /var/log directory

On the other hand, the default container type could be too loose for certain use cases:

  • It has no SELinux network controls — all container processes can bind to any network port
  • It has no SELinux control on Linux capabilities — all container processes can use all capabilities

There is one solution to handle both use cases: write a custom SELinux security policy for the container. This can be tricky, because SELinux expertise is required. For this purpose, the udica tool was created.

Introducing udica

Udica generates SELinux security profiles for containers. Its concept is based on the “block inheritance” feature inside the common intermediate language (CIL) supported by SELinux userspace. The tool creates a policy that combines:

  • Rules inherited from specified CIL blocks (templates), and
  • Rules discovered by inspecting the container JSON file, which contains mount point and port definitions

You can load the final policy immediately, or move it to another system to load into the kernel. Here’s an example, using a container that:

  • Mounts /home as read only
  • Mounts /var/spool as read/write
  • Exposes port tcp/21

The container starts with this command:

# podman run -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash

The default container type (container_t) doesn’t allow any of these three actions. To prove it, you can use the sesearch tool to check whether the allow rules are present on the system:

# sesearch -A -s container_t -t home_root_t -c dir -p read 

There’s no allow rule present that lets a process labeled as container_t access a directory labeled home_root_t (like the /home directory). The same situation occurs with /var/spool, which is labeled var_spool_t:

# sesearch -A -s container_t -t var_spool_t -c dir -p read

On the other hand, the default policy completely allows network access.

# sesearch -A -s container_t -t port_type -c tcp_socket
allow container_net_domain port_type:tcp_socket { name_bind name_connect recv_msg send_msg };
allow sandbox_net_domain port_type:tcp_socket { name_bind name_connect recv_msg send_msg };

Securing the container

It would be great to restrict this access and allow the container to bind just to TCP port 21 or to ports with the same label. Imagine you find an example container using podman ps whose ID is 37a3635afb8f:

# podman ps -q
37a3635afb8f

You can now inspect the container and pass the inspection file to the udica tool. The name for the new policy is my_container.

# podman inspect 37a3635afb8f > container.json
# udica -j container.json my_container
Policy my_container with container id 37a3635afb8f created!

Please load these modules using:
# semodule -i my_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil}

Restart the container with: "--security-opt label=type:my_container.process" parameter

That’s it! You just created a custom SELinux security policy for the example container. Now you can load this policy into the kernel and make it active. The udica output above even tells you the command to use:

# semodule -i my_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil}

Now you must restart the container to allow the container engine to use the new custom policy:

# podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash

The example container is now running in the newly created my_container.process SELinux process type:

# ps -efZ | grep my_container.process
unconfined_u:system_r:container_runtime_t:s0-s0:c0.c1023 root 2275 434 1 13:49 pts/1 00:00:00 podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash
system_u:system_r:my_container.process:s0:c270,c963 root 2317 2305 0 13:49 pts/0 00:00:00 bash

Seeing the results

The command sesearch now shows allow rules for accessing /home and /var/spool:

# sesearch -A -s my_container.process -t home_root_t -c dir -p read
allow my_container.process home_root_t:dir { getattr ioctl lock open read search };
# sesearch -A -s my_container.process -t var_spool_t -c dir -p read
allow my_container.process var_spool_t:dir { add_name getattr ioctl lock open read remove_name search write }

The new custom SELinux policy also allows my_container.process to bind only to TCP/UDP ports labeled the same as TCP port 21:

# semanage port -l | grep 21 | grep ftp
ftp_port_t tcp 21, 989, 990
# sesearch -A -s my_container.process -c tcp_socket -p name_bind
allow my_container.process ftp_port_t:tcp_socket name_bind;

Conclusion

The udica tool helps you create SELinux policies for containers based on an inspection file without any SELinux expertise required. Now you can increase the security of containerized environments. Sources are available on GitHub, and an RPM package is available in Fedora repositories for Fedora 28 and later.


Photo by Samuel Zeller on Unsplash.


Oversharing and safety in the age of social media

Many years ago, I worked with healthcare organizations to install infrastructure to support the modernization of their information systems. As I traversed hospitals – both in public and private sectors – I was often struck by one particular best practice: the privacy reminders were ubiquitous. If I stepped into an elevator or walked down a hallway, there was signage to remind everyone about patient privacy. Nothing was left to chance or interpretation. This was also pre-social media, so the concerns ranged from public conversations or inappropriate use of email, to leaving a document on a public printer.

Fast forward to 2019. Our society and culture have changed. We are much freer with our personal information on social media. We talk openly about our lives and post pictures and family information in the wild. We are less concerned about our privacy, as we use these platforms to connect with others – a connection we might be denied given our busy lives. However, as has oft been written, these platforms can be a cache of riches for someone seeking to steal your identity or compromise your email and other accounts. This same type of free flow of information is also following us to other parts of our lives and making it easier for the bad guys to attack and profit. Let me explain with a few examples.

I travel a bit (okay, a lot). While my global travel is mostly for work, this provides an informative world lens for people watching and listening. I am often between flights in an airport reading or catching up on email and overhear a wide variety of conversations – without even trying. Recently, I was in the U.S., delayed at the Chicago O’Hare airport for several hours as “there is (was) weather in Chicago,” the worst phrase in the US travel industry. I overheard a man on the phone discussing his declined credit card in detail, including his full name, billing ZIP code, card number, expiration date, and so on. My shock quickly faded when I started thinking about how many other times I was in public and overheard things that could lead to financial or IP or other loss for an individual or company. The number is non-trivial. That’s when I decided to tweet some simple advice, and solicit input via my Twitter feed.

The results were equally horrifying and amusing. Some even thought my post was an attempt at social engineering. Overall, the response convinced me to write a blog, as the evidence I gathered suggests this isn’t a small problem. Rather, it’s a real problem. So let me start by sharing some examples and then make some suggestions (which may seem obvious to many of you) on how to protect your privacy and security.

So how do you protect yourself from theft of personal or proprietary company information in public? The super obvious, somewhat flippant answer is: don’t share any of this type of information in public. But, at times, this is easier said than done. If you travel as much as I do, it becomes impossible to refrain from conducting some confidential business whilst you are on the road. So how do you actually protect yourself?

Many people will read this blog and say, “well that’s obvious,” but sadly it is not, based on what I have personally observed and the feedback I received in preparation for this post. When in these types of situations, my recommendations are:

  • Use privacy screens on your laptop and your phone when in public, in meetings, and on airplanes. I cannot tell you how much confidential information I could have obtained just sitting behind someone on a plane.
  • Do not discuss confidential information in a public place: restaurant, club, elevator, airplane, etc. Based on the Twitter solicited feedback, people somehow think planes are cones of silence.
  • If you must conduct personal/confidential business on the road, wait until you arrive at your hotel or find a quiet place in the airport/club/restaurant where your back is to a wall and you can see anyone near you. Use your best judgment.
  • Never give anyone your password. I don’t know how to say this more strongly. Do not ever give anyone your password.
  • Use a password manager. Don’t reuse passwords. This way if someone does obtain one of your passwords, you limit your exposure.
  • Be cognizant of what you put on social media. I am very active on social media but, remember, your information can and will be used against you. Be careful of when and how you post to avoid advertising when your home will be vacant for vacation or any personally identifiable information that could expose your passwords.
  • If someone calls you claiming to be from your bank, the IRS, the police, your company, a tech support organization, offer to call them back from a number that is published on their legitimate website or the back of your credit card, etc. Do not give any confidential information to an inbound caller.
  • Use encryption for sensitive data and sensitive communications.
  • If you must install IoT devices at home, segment them to a unique network.
  • If you are renting a private vacation home, there are some very good apps to scan the network to make certain you have privacy (e.g., to detect cameras in a location that was not disclosed by the owner).
  • I am not a fan – at all – of listening devices at home, but if you do have one, remember we may one day find out that all of your conversations were recorded. Be aware of what you say.

The world is quickly evolving as we embrace more technology. The onus is largely on users to protect themselves. While this blog is just a high-level discussion on social engineering and privacy, using common sense is always your best defense.


Introducing Security Policy Advisor—a new service to manage your Office 365 security policies

Securing your users has never been more important, or more difficult. For many, it’s become a scramble to simply stay ahead of the latest threats. And all too often the complexity and variety of the security solutions themselves only adds to your burden. What most people really need is someone to help shoulder the load. We hear you. And that’s why we’re taking steps to provide new, easy-to-use capabilities that support you as you protect the people, apps, devices, and data in your organizations.

Today, we’re excited to announce the public preview of Security Policy Advisor, the first in a series of security investments to further strengthen the apps in Office 365 ProPlus. Security Policy Advisor is a service that offers an easier, more effective way to manage your security policies. It provides custom policy recommendations, supported with rich data insights into how these policies would impact your group’s use of features in Office, allowing you to make decisions with full information.

Simplify policy management across devices

Earlier today, we announced the release of our new Office cloud policy service, an easy-to-use cloud-based tool that allows you to define policies for Office 365 ProPlus and assign them to users via Azure Active Directory groups. Once defined, policies are automatically enforced as individuals sign in. What’s more, Office cloud policy service extends your reach to managed and unmanaged devices without requiring any on-premises infrastructure or modern device management services. If you have a BYOD policy or users who occasionally sign in to Office 365 ProPlus from other devices, you’re covered.

Manage and monitor policy configurations with confidence

Now, we’re building on this service to help you secure your organization with confidence, taking the guesswork out of configuring security policies. In the past, the burden fell to you alone to determine if a particular policy would help or hurt a specific group. Setting macro policies, for example, involved numerous group policy objects (GPOs), each with multiple settings, detailed yet always too generic security baseline studies, and cumbersome deployment. And in the end, you still had to wait for frustrated support calls to know the user impact.

Security Policy Advisor changes the game with knowledge already available within your organization. It analyzes how individuals use Office and then recommends specific policies to boost your security profile. Even better, for each recommendation, you can see how people would be impacted, giving you greater confidence in choosing policies that are right for your environment. It may recommend, for example, disabling VBA macros in Word or macros in Excel files from the web—providing relevant threat intelligence (if available) and identifying just how frequently individuals in your group use those features and would be impacted by the policy.

When you’re ready, you can apply policies at the app, feature, or group level—all with one click.

The job doesn’t end once a policy is applied. In a dynamic workplace, needs evolve, groups change, and a set of policies that worked just months ago may actually become a hindrance. Security Policy Advisor actively monitors policy impact on your employees, highlighting areas worth your attention or suggesting changes if needed. If you’ve enabled individuals to override specific policies, you’ll see how this is used. With cloud-based management, you can update or even roll back at the push of a button.

And rest assured: if you are currently using GPOs, they can run in parallel with any changes you make with the Office cloud policy service. Existing policies are retained and, if there are any conflicts, policies you apply via Office cloud policy service will always take precedence.

See what Security Policy Advisor recommends for you

Security Policy Advisor is now available in preview in English (en-us) with broad availability in coming weeks. If you’re an administrator in an organization that has deployed Office 365 ProPlus, you can start right now by signing in to the Office client management portal and configuring Office policies. For each configuration you create and assign to a group, you’ll receive recommendations with supporting data that you can review and deploy to users as a policy. Visit Tech Community for additional information and documentation.

This is just the beginning of a set of new security capabilities we’re working on for ProPlus. We’re looking forward to hearing your feedback, and we’ll have more to share with you later this year.