REDMOND, Wash. — At first glance, the gathering inside Building 99 at Microsoft this week looked like many others inside the company, as technical experts shared hard-earned lessons for using machine learning to defend against hackers.
It looked normal, that is, until you spotted the person in the blue Google shirt addressing the group, alongside speakers from Salesforce, Netflix and Microsoft. The day-long event also included representatives of Facebook, Amazon and other big cloud providers and services that would normally treat such technical insights as closely guarded secrets.
As the afternoon session ended, the organizer from Microsoft, security data wrangler Ram Shankar Siva Kumar, complimented panelist Erik Bloch, the Salesforce security products and program management director, for “really channeling the Ohana spirit,” referencing the Hawaiian word for “family,” which Salesforce uses to describe its internal culture of looking out for one another.
It was almost enough to make a person forget the bitter rivalry between Microsoft and Salesforce.
Siva Kumar then gave attendees advice on finding the location of the closing reception. “You can Bing it, Google it, whatever it is,” he said, as the audience laughed at the rare concession to Microsoft’s longtime competitor.
It was no ordinary gathering at Microsoft, but then again, it’s no ordinary time in tech. The Security Data Science Colloquium brought the competitors together to focus on one of the biggest challenges and opportunities in the industry.
Machine learning, one of the key ingredients of artificial intelligence, is giving the companies new superpowers to identify and guard against malicious attacks on their increasingly cloud-oriented products and services. The problem is that hackers are using many of the same techniques to take those attacks to a new level.
“The challenge is that security is a very asymmetric game,” said Dawn Song, a UC Berkeley computer science and engineering professor who attended the event. “Defenders have to defend across the board, and attackers only need to find one hole. So in general, it’s easier for attackers to leverage these new techniques.”
That helps to explain why the competitors are teaming up.
“At this point in the development of this technology it’s really critical for us to move at speed to all collaborate,” explained Mark Russinovich, the Microsoft Azure chief technology officer. “A customer of Google is also likely a customer of Microsoft, and it does nobody any good or gives anybody a competitive disadvantage to keep somebody else’s customer, which could be our own customer, insecure. This is for the betterment of everybody, the whole community.”
[Editor’s Note: Russinovich is a keynoter at the GeekWire Cloud Tech Summit, June 27 in Bellevue, Wash.]
This spirit of collaboration is naturally more common in the security community than in the business world, but the colloquium at Microsoft has taken it to another level. GeekWire is the first media organization to go inside the event, although some presentations weren’t opened up to us, due in part to the sensitive nature of some of the information the companies shared.
The event, in its second year, grew out of informal gatherings between Microsoft and Google, which resulted in part from connections Siva Kumar made on long-distance runs with Google’s tech security experts. After getting approval from his manager, he brought one of the Google engineers to Microsoft two years ago to compare notes with his team.
Things have snowballed from there. After the first event, last year, Siva Kumar posted about the colloquium, describing it as a gathering of “security data scientists without borders.” As the word got out, additional companies asked to be involved, and Microsoft says this year’s event was attended by representatives of 17 different tech companies in addition to university researchers.
The event reflects a change in Microsoft’s culture under CEO Satya Nadella, as well as a shift in the overall industry’s approach. Of course, the companies are still business rivals competing to best each other’s products. But in years or decades past, many treated security as a competitive advantage, as well. That’s what has changed.
“This is not a competing thing. This is not about us trying to one up each other,” Siva Kumar said. “It just feels like, year over year, our problems are just becoming more and more similar.”
In one afternoon session this week, representatives from Netflix, one of Amazon Web Services’ marquee customers, gave detailed briefings on the streaming service’s internal machine learning tools, including its “Trainman” system for detecting and reporting unusual user activity.
Developing and improving the system has been a “humbling journey,” said Siamac Mirzaie from the Netflix Science & Analytics Team, before doing a deep dive on the technical aspects of Trainman.
Depending on the situation, he said, Netflix uses Python, Apache Spark or Apache Flink to bring the data into its system and append the necessary attributes. It then uses simple rules, statistical models and machine learning models to detect anomalies, running on Flink or Spark, followed by a post-processing layer built on a combination of Spark and Node.js. Finally, a visualization program lays the anomalies out in a timeline that people inside the company can use to drill down into specific events.
“The idea is to refine the various data anomalies that we’ve generated in the previous stage into anomalies that our application owner or security analyst can actually relate to,” Mirzaie said.
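Netflix hasn’t published Trainman’s code, but the shape of the pipeline described above — simple rules and statistical models to detect anomalies, then a post-processing layer that refines them into analyst-friendly records — can be sketched in miniature. Everything here (the `Event` fields, the z-score threshold, the sample data) is an illustrative assumption, not Netflix’s actual implementation, and the real system runs at scale on Flink and Spark rather than in a single Python process:

```python
# Illustrative sketch of a rules-plus-statistics anomaly stage.
# All names, fields and thresholds are hypothetical, not Trainman's.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Event:
    user: str
    api_calls: int   # request volume in one time window
    new_geo: bool    # True if the request came from a never-before-seen location

def rule_anomalies(events):
    """Simple rule: flag any activity from an unfamiliar location."""
    return [e for e in events if e.new_geo]

def statistical_anomalies(events, baseline, z_threshold=3.0):
    """Flag events whose call volume is a z-score outlier vs. a historical baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [e for e in events
            if sigma > 0 and abs(e.api_calls - mu) / sigma > z_threshold]

def post_process(anomalies):
    """Deduplicate per user and shape raw hits into analyst-friendly records."""
    seen, report = set(), []
    for e in anomalies:
        if e.user not in seen:
            seen.add(e.user)
            report.append({"user": e.user, "api_calls": e.api_calls,
                           "new_geo": e.new_geo})
    return report

history = [40, 42, 41, 39, 43, 38, 41, 40]   # typical per-window volumes
window = [Event("alice", 40, False), Event("bob", 42, False),
          Event("mallory", 500, True)]
alerts = post_process(rule_anomalies(window) + statistical_anomalies(window, history))
print(alerts)   # only mallory survives post-processing
```

The point of the final stage mirrors Mirzaie’s comment: the rule layer and the statistical layer both fire on the same suspicious user, and post-processing collapses them into a single record an analyst can actually act on.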
The stakes are high given the $8 billion that Netflix is expected to spend on content this year.
But the stakes might be even higher for Facebook. The social network, which has been in the international spotlight over misuse of its platform by outside companies and groups, says it uses a combination of automated and manual systems to identify fraudulent and suspicious activity.
Facebook, which held a similar event of its own in April, was among the companies that presented during the gathering at Microsoft this week. Facebook recently announced that it used new machine learning practices to detect more than 500,000 accounts tied to financial scams.
During his keynote, Microsoft’s Russinovich talked in detail about Windows PowerShell, the command-line program that is a popular tool for attackers in part because it’s built into the system. Microsoft’s Windows Defender Advanced Threat Protection is designed to detect suspicious command lines, and Microsoft was previously using a traditional model that was trained to recognize potentially malicious sequences of characters.
“That only got us so far,” Russinovich said in an interview.
After brainstorming ways to solve the problem, the company’s security defense researchers figured out how to apply deep neural networks, more commonly used in vision-based object detection, to detecting malicious PowerShell scripts. They essentially came up with a way to encode command lines to make them look like images to the machine learning model, Russinovich explained. The result surpassed the traditional technique “by a significant amount,” he said.
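Microsoft hasn’t disclosed the exact encoding, but one minimal way to make a command line “look like an image” is to map its characters to byte values on a fixed-size grid, the same shape a vision-style convolutional network expects. The grid dimensions, normalization and zero-padding below are assumptions for illustration, not Microsoft’s actual scheme:

```python
# Hypothetical sketch: encode a command line as a small 2D "image"
# of normalized byte values. Dimensions and padding are invented.
WIDTH, HEIGHT = 16, 8   # a 128-character grid

def command_to_grid(cmd: str):
    """Encode a command line as a HEIGHT x WIDTH grid of values in [0, 1]."""
    data = cmd.encode("utf-8", errors="replace")[: WIDTH * HEIGHT]
    pixels = [b / 255.0 for b in data]              # bytes -> grayscale-like values
    pixels += [0.0] * (WIDTH * HEIGHT - len(pixels))  # zero-pad, like image borders
    return [pixels[r * WIDTH:(r + 1) * WIDTH] for r in range(HEIGHT)]

grid = command_to_grid("powershell -enc SQBFAFgA...")
print(len(grid), len(grid[0]))   # 8 16
```

Once command lines are fixed-size grids like this, an off-the-shelf image classifier architecture can be trained on labeled benign and malicious samples, which is the reuse of vision techniques Russinovich described.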
At the closing panel discussion, David Seidman, Google security engineering manager, summed up the stated philosophy of the event. “We are not trying to compete on the basis of our corporate security,” he said. “Google is not trying to get ahead of Microsoft in the cloud because Microsoft got compromised. That’s the last thing we want to see.”
“We are fighting common enemies,” Seidman added. “The same attackers are coming after all of us, and an incident at one company is going to affect that customer’s trust in all the cloud companies they do business with. So we have very much aligned interests here.”