Deep Learning's Most Dangerous Vulnerability: Adversarial Attacks
Groundbreaking theory, big data, and compute power — with this trifecta, the extraordinary advent of deep learning seems almost inevitable. It propels computer vision and real-time operational and industrial applications to new heights. Inevitably, our reliance on deep learning grows as we ride this accelerating wave of progress. And yet, this very momentum could be carrying us to a dangerous place: deploying deep learning exposes new and distinctive vulnerabilities. In this session, Luba Gloukhova will survey the various forms of adversarial attacks against neural networks that have emerged, and the state-of-the-art methods for defending against them.
Luba Gloukhova leads and executes advanced machine learning projects for high-tech firms and major research universities in Silicon Valley. She also preaches what she practices, serving as the founding chair of Deep Learning World — the premier conference covering the commercial deployment of deep learning — and delivering highly rated talks at many other events. Luba previously supported Stanford faculty as an internal consultant at the university's Graduate School of Business, conceiving and generating innovative solutions to accelerate research. Before that, she gained industry experience in high-frequency trading analysis, catastrophe risk modeling, and marketing analytics. She received her master's in analytics from the University of San Francisco and two bachelor's degrees from Berkeley, in applied mathematics and economics. Luba also teaches yoga and enjoys an active lifestyle.