Monday, October 2, 2017

Machine Learning and Security

This article helps dispel the "magic" of machine learning that is being hyped so much today.

There is an excellent article in SD Times that really brings out the challenges with machine learning. It is titled:

Black Hat USA 2017: Machine learning is not a silver bullet for security

The article is well worth reading, and Hyrum Anderson clearly states the challenges:

"Hyrum Anderson, technical director of data science for cybersecurity provider Endgame, presented research on machine learning malware evasion at this week’s Black Hat USA 2017 conference in Las Vegas. 
“I want you to know I am an advocate of machine learning for its ability to detect things that have never been seen,” Anderson said. “[But] machine learning has blind spots and depending on what an attacker knows about your machine learning model, they can be really easy to exploit.”

Anderson explained, machine learning is not only just susceptible to evasion attacks, but it is susceptible to these attacks by other machine learning methods. Researchers at Endgame have learned it is not only enough to provide a cybersecurity system, they have to check and double check the product as well as test and think about how adversaries might exploit or evade them. “If an attacker has access to your machine learning model, he can actually ask it ‘What can I do to confuse you the most,’ and the model will tell them.”"
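The point about asking a model "what can I do to confuse you the most" is the core of white-box evasion: with access to the model, an attacker can follow its own gradient to find the smallest change that flips a malicious verdict to benign. A minimal sketch of that idea, using a made-up linear classifier (the weights and features here are hypothetical stand-ins, not any real detector):

```python
import numpy as np

# Hypothetical linear malware detector: score = sigmoid(w . x + b),
# where scores above 0.5 are flagged as malicious. The weights are
# invented for illustration only.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.1

def score(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Start from a feature vector the model flags as malicious.
x = np.sign(w)            # aligned with the weights, so score > 0.5

# "Ask the model what confuses it most": for this model, the gradient
# of the score with respect to the input points along w, so nudging
# each feature against its weight reduces the malicious score fastest.
step = 0.5
x_evasive = x.copy()
while score(x_evasive) > 0.5:
    x_evasive -= step * np.sign(w)

print(score(x), score(x_evasive))   # flagged score, then evasive score
```

In a real attack the features would have to stay consistent with a working binary, which is exactly why Anderson frames this as finding a model's blind spots rather than a free pass for attackers.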

What is very interesting is what Endgame is doing with an open source machine learning malware detector:

"As part of his research, Anderson is releasing a machine learning malware detector into open source as well as the framework users can use to improve the AI agent, improve the malware, or attack their own models to learn about their weaknesses. “The framework that we’re providing can be readily adapted to attack your own machine learning model. To be clear, there are easier ways to attack your machine learning model since you know everything about it. But this framework represents what we believe to be the most realistic attack that an adversary can launch and that can be used to understand your model’s blind spots,” he said.  "
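The "most realistic attack" Anderson describes is black-box: the adversary can only query the detector for a verdict and must find functionality-preserving mutations that slip past it. A toy sketch of that query loop, where both the detector and the mutation list are hypothetical placeholders, not Endgame's released framework:

```python
import random

def detector(features):
    # Toy stand-in for a malware classifier: flags any sample whose
    # summed feature weight exceeds a fixed threshold.
    return sum(features) > 4.0

# Stand-ins for functionality-preserving changes an attacker might try
# (packing a section, padding with benign bytes, renaming an import).
MUTATIONS = [
    lambda f: f[:-1] + [f[-1] * 0.5],
    lambda f: [v * 0.9 for v in f],
    lambda f: f[1:] + [0.0],
]

def evade(features, max_queries=200, seed=1):
    """Randomly mutate the sample until the detector passes it,
    counting how many queries the black-box attack needed."""
    random.seed(seed)
    queries = 0
    while detector(features) and queries < max_queries:
        features = random.choice(MUTATIONS)(features)
        queries += 1
    return features, queries, not detector(features)

sample = [2.0, 1.5, 1.0, 0.8]          # flagged: sum is 5.3
evaded, n_queries, succeeded = evade(sample)
```

Running the same loop against your own model, as Anderson suggests, turns each successful evasion into a labeled blind spot you can fold back into training.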