Machine Learning Attacks – A New Security Epidemic
Machine learning is increasingly used at the core of critical applications such as self-driving cars, drug recommendation systems, high-volume trading algorithms, and systems that protect the privacy and security of sensitive data; any adversarial manipulation of an ML model can therefore have devastating consequences.
Wondering what it would be like to have your machine learning (ML) model come under a security attack? Have you thought through how to monitor security attacks on your AI/ML models? Historically, little attention has been paid to the ways in which AI can be used maliciously. ML models, much like any piece of software, are prone to theft and subsequent reverse engineering. Machine learning is also susceptible to adversarial activity, where an attacker manipulates input data to deceive a deployed ML model.
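To make the idea of adversarial input manipulation concrete, here is a minimal sketch of one well-known evasion technique, the Fast Gradient Sign Method (FGSM), applied to a toy logistic-regression classifier. All weights and values below are illustrative, not drawn from any real system: the attacker nudges each input feature in the direction that most increases the model's loss, flipping the prediction with a small perturbation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(w, x, y, eps):
    """FGSM on logistic regression.

    The gradient of the cross-entropy loss with respect to the
    input x is (p - y) * w, where p = sigmoid(w . x). The attack
    steps each feature by eps in the sign of that gradient.
    """
    p = sigmoid(w @ x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Toy model with fixed, hypothetical weights.
w = np.array([2.0, -1.0])
x = np.array([0.5, 0.2])   # clean input
y = 1.0                    # true label

p_clean = sigmoid(w @ x)              # ~0.69 -> predicted class 1
x_adv = fgsm_perturb(w, x, y, eps=0.5)
p_adv = sigmoid(w @ x_adv)            # ~0.33 -> prediction flipped to class 0
```

The same gradient-sign idea scales to deep networks, where imperceptibly small per-pixel perturbations can cause confident misclassification of images.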
Join us to explore the research and solutions we developed to combat these ML threats. This session describes the potential threats associated with current methods of collecting data and building ML systems, and elaborates on techniques to protect these models. The goal is to bridge the gap between machine learning and privacy and security technologies by acquainting attendees with machine learning, the threats it poses to privacy, the proposed solutions, and the challenges that lie ahead.