Towards Trustworthy ML:
Rethinking Security and Privacy for ML
ICLR 2020 Workshop
Date: April 26, 2020 (Sunday)
Location: Online (originally planned for Millennium Hall, Addis Ababa, Ethiopia); co-located with ICLR 2020
Contact: trustworthyiclr20@googlegroups.com (this will email all organizers)
How to participate?
- Register for ICLR and access the virtual workshop through ICLR's virtual conference website, which also embeds the livestream shown below.
- Watch the talk recordings in the livestream below, or at any time using the links in the Schedule table below.
- Suggest and vote on panel questions ahead of and during the panel, which will be livestreamed below between 11:30am and 12:30pm Eastern.
- Attend the live poster sessions, with a breakout room for each poster (Zoom links are listed in the Schedule below under "Accepted papers").
- Join the Rocket Chat or the Hallway Track Zoom here to meet and discuss with other workshop attendees throughout the day.
Streaming
Throughout the day we will be streaming (recorded) invited talks, (recorded) contributed talks, and a (live) panel discussion. Please see the Schedule for details on what is being (live)streamed and when. To engage in the workshop during the (live)stream, you can:
- Join the Rocket Chat or the Hallway Track Zoom here to discuss the workshop content with other participants.
- Suggest and vote on panel discussion questions to help us put together the best discussion possible! Our invited speakers and panelists will discuss, live, a mix of curated and audience questions.
Poster Session
For each paper being presented at the workshop, we will host (1) the pre-recorded presentation from SlidesLive and (2) a Zoom breakout room during the poster session. The Zoom breakout rooms will be open only during the poster session timeslot (see the Schedule), during which authors will join the meeting rooms to allow you to ask them questions face-to-face.
-> The Zoom links for individual posters can be found under the list of "Accepted papers".
Hallway track Rocket Chat and Zoom
Throughout the day, we encourage attendees to participate in the "hallway track" discussions on Rocket Chat and/or in a dedicated Zoom meeting room (unmonitored and unmoderated). Please follow the ICLR code of conduct.
-> The Hallway Track Rocket chat can be found here and the Hallway Track Zoom here.
Schedule (Eastern time)
Sessions marked "Live" are streamed in real time; all other sessions are pre-recorded (see the link column). The poster session is distributed across multiple Zoom meetings, listed under "Accepted papers" below.
Time | Session | Link |
---|---|---|
8:45am | Live opening remarks (Nicolas Papernot) | Livestream (above) |
8:50am | Zico Kolter: TML_3 Beyond "provable" robustness: new directions in adversarial robustness | Recording (mirror site) |
9:30am | Lujo Bauer: TML_0 On the Susceptibility to Adversarial Examples Under Real-World Constraints | Recording |
10:10am | Seeta Peña Gangadharan: TML_1 Context, Research, Refusal: Perspectives on Abstract Problem-Solving | Recording (audio), Slides + Transcript |
10:50am | Timnit Gebru: TML_2 Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning | Recording |
11:30am | Live panel moderated by Ram Shankar | Livestream (above), Ask + vote questions |
12:30pm | Break | |
1:00pm | Live poster session with all authors of accepted papers | Zoom links below under "Accepted papers" |
3:00pm | TML_4 Increasing the robustness of DNNs against image corruptions by playing the Game of Noise | Recording |
3:20pm | TML_5 Bounding Singular Values of Convolution Layers | Unable to present |
3:40pm | TML_6 Revisiting Ensembles in an Adversarial Context: Improving Natural Accuracy | Recording |
4:00pm | TML_7 Games for Fairness and Interpretability | Recording |
4:20pm | TML_8 Output Diversified Initialization for Adversarial Attacks | Recording |
4:40pm | TML_9 On the Benefits of Models with Perceptually-Aligned Gradients | Recording |
5:00pm | TML_10 Randomized Smoothing of All Shapes and Sizes | Recording |
5:20pm | TML_11 On Pruning Adversarially Robust Neural Networks | Recording |
5:40pm | TML_12 DADI: Dynamic Discovery of Fair Information with Adversarial Reinforcement Learning | Recording |
6:00pm | TML_13 Attacking Neural Text Detectors | Recording |
6:20pm | TML_14 Black-Box Smoothing: A Provable Defense for Pretrained Classifiers | Recording |
6:40pm | TML_15 Privacy-preserving collaborative machine learning on genomic data using TensorFlow | Recording |
7:00pm | TML_16 Preventing backdoor attacks via Student-Teacher Ensemble Training | Recording |
7:20pm | TML_17 Politics of Adversarial Machine Learning | Recording |
7:40pm | TML_18 Improved Wasserstein Attacks and Defenses | Recording |
8:00pm | TML_19 Adversarial Robustness in Data Augmentation | Recording |
8:20pm | End | |
Accepted papers
- Zoom for poster session Increasing the robustness of DNNs against image corruptions by playing the Game of Noise Evgenia Rusak (University of Tuebingen); Lukas Schott (Max Planck Institute for Intelligent Systems and University of Tuebingen); Roland S. Zimmermann (University of Tuebingen); Julian Bitterwolf (University of Tuebingen); Oliver Bringmann (University of Tuebingen); Matthias Bethge (University of Tübingen); Wieland Brendel (University of Tuebingen) [paper]
- Bounding Singular Values of Convolution Layers Sahil Singla (University of Maryland); Soheil Feizi (University of Maryland)
- Zoom for poster session Revisiting Ensembles in an Adversarial Context: Improving Natural Accuracy Aditya Saligrama (MIT PRIMES); Guillaume Leclerc (MIT)
- Zoom for poster session Games for Fairness and Interpretability Eric Chu (Massachusetts Institute of Technology); Nabeel N Gillani (Massachusetts Institute of Technology); Sneha Priscilla Makini (Massachusetts Institute of Technology) [paper]
- Zoom for poster session Output Diversified Initialization for Adversarial Attacks Yusuke Tashiro (Stanford University); Yang Song (Stanford University); Stefano Ermon (Stanford)
- Zoom for poster session On the Benefits of Models with Perceptually-Aligned Gradients Gunjan Aggarwal (Adobe); Abhishek Sinha (Stanford); Mayank Singh (Adobe Systems); Nupur Kumari (Adobe Systems)
- Teams for poster session Randomized Smoothing of All Shapes and Sizes Greg Yang (Microsoft Research AI); Tony Duan (Microsoft Research); J. Edward Hu (Microsoft Research AI); Hadi Salman (Microsoft Research); Ilya Razenshteyn (Microsoft Research); Jerry Li (Microsoft) [paper]
- Zoom for poster session On Pruning Adversarially Robust Neural Networks Vikash Sehwag (Princeton University); Shiqi Wang (Columbia University); Prateek Mittal (Princeton University); Suman Jana (Columbia University)
- Zoom for poster session DADI: Dynamic Discovery of Fair Information with Adversarial Reinforcement Learning Michiel A Bakker (MIT); Duy Patrick Tu (MIT); Humberto Riveron Valdes (MIT); Krishna Gummadi (MPI-SWS); Kush R Varshney (IBM Research); Adrian Weller (University of Cambridge); Alex 'Sandy' Pentland (MIT)
- Zoom for poster session Attacking Neural Text Detectors Maximilian Wolff (Viewpoint School) [paper]
- Teams for poster session Black-Box Smoothing: A Provable Defense for Pretrained Classifiers Hadi Salman (Microsoft Research AI); Mingjie Sun (Carnegie Mellon University); Greg Yang (Microsoft Research AI); Ashish Kapoor (Microsoft); Zico Kolter (Carnegie Mellon University)
- Zoom for poster session Privacy-preserving collaborative machine learning on genomic data using TensorFlow Cheng Hong (Alibaba Group); Zhicong Huang (Alibaba Group); Wen-jie Lu (Alibaba Group); Hunter Qu (Alibaba Group); Li Ma (Alibaba Health); Morten Dahl (Dropout Labs); Jason Mancuso (Dropout Labs) [paper]
- Zoom for poster session Preventing backdoor attacks via Student-Teacher Ensemble Training Nur Muhammad (Mahi) Shafiullah (Massachusetts Institute of Technology); Shibani Santurkar (MIT); Dimitris Tsipras (MIT); Aleksander Madry (MIT)
- Zoom for poster session Politics of Adversarial Machine Learning Kendra Albert (Harvard Law School); Jonathon Penney (Citizen Lab (University of Toronto) / Dalhousie / Princeton CITP); Bruce Schneier (Belfer Center for Science and International Affairs, Harvard Kennedy School); Ram Shankar Siva Kumar (Microsoft (Azure Security)) [paper]
- Teams for poster session Improved Wasserstein Attacks and Defenses J. Edward Hu (Microsoft Research AI); Greg Yang (Microsoft Research AI); Adith Swaminathan (Microsoft Research); Hadi Salman (Microsoft Research) [paper]
- Zoom for poster session Adversarial Robustness in Data Augmentation Hamid Eghbal-zadeh (LIT AI Lab & Johannes Kepler University, Institute of Computational Perception); Khaled Koutini (Johannes Kepler University); Verena Haunschmid (Johannes Kepler University Linz); Paul Primus (Johannes Kepler University); Michal Lewandowski (Software Competence Center Hagenberg); Werner Zellinger (Software Competence Center Hagenberg); Gerhard Widmer (Johannes Kepler University) [paper]
Sponsor
Thank you to the Open Philanthropy Project for sponsoring this event. Their grant will fund a best paper award.
Abstract
As ML systems are pervasively deployed, security and privacy challenges have become central to their design. The community has produced a vast body of work to address these challenges and increase trust in ML. Yet much of it concentrates on well-defined problems that are mathematically tractable but hard to translate to the threats facing real-world systems.
This workshop calls for novel research that addresses the security and privacy risks arising from the deployment of ML, from malicious exploitation of vulnerabilities (e.g., adversarial examples or data poisoning) to concerns about fair, ethical and privacy-preserving uses of data. We aim to provide a home for new ideas “outside the box”, even if the proposed preliminary solutions do not match the performance guarantees of known techniques. We believe that such ideas could prove invaluable in spurring new lines of research that make ML more trustworthy.
We aim to bring together experts from a variety of communities (ML, computer security, data privacy, fairness & ethics) in an effort to synthesize promising ideas and research directions, as well as to foster and strengthen cross-community collaborations. Indeed, many fundamental problems studied in these diverse areas can be broadly recast as questions about the (in-)stability of ML models: generalization in ML, model memorization in privacy, adversarial examples in security, model bias in fairness and ethics, etc. Problems on which we hope to encourage progress include:
(#1) Adversarial robustness beyond Lp balls. Recent years have seen a tremendous amount of research devoted to making ML models robust to small test-time perturbations sampled adversarially from an Lp-ball. While seemingly simple, this has proven a difficult challenge that remains mostly unsolved today. Yet, even if robustness in an Lp ball were achieved, complete model robustness would still be far from guaranteed. We encourage researchers to move beyond this “toy” problem and characterize the robustness of real-world systems for which adversarial examples pose a threat (e.g., malware detection, visual ad-blocking, voice assistants, etc.). We hope that the specifics of these systems and of their deployments point towards alternative, and more easily attainable, avenues to secure inference.
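To make the Lp-ball threat model concrete, here is a minimal illustrative sketch (our own example, not drawn from the workshop or any submission) that crafts an L-infinity-bounded perturbation against a toy logistic-regression classifier; the weights, input, and budget eps are hypothetical placeholders.

```python
# Illustrative only: an L-infinity-bounded (FGSM-style) perturbation against a
# toy logistic-regression model. All values below are made up for the example.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=20), 0.1      # hypothetical "trained" weights and bias
x, y = rng.normal(size=20), 1.0      # one input with a label in {0, 1}
eps = 0.05                           # radius of the L-infinity ball

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the logistic loss with respect to the input x.
grad_x = (sigmoid(w @ x + b) - y) * w

# Move each coordinate by eps in the direction that increases the loss.
x_adv = x + eps * np.sign(grad_x)

assert np.max(np.abs(x_adv - x)) <= eps + 1e-12  # perturbation stays in the eps-ball
print("clean score:", sigmoid(w @ x + b), "adversarial score:", sigmoid(w @ x_adv + b))
```

The constraint that every coordinate of the perturbation stays within eps is exactly the "Lp ball" (here p = infinity) that much of the literature optimizes for; the attacks that matter for malware detection, ad-blocking, or voice assistants are rarely captured by such a ball.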
(#2) Stateful robustness. Current adversarial example research focuses on securing a classifier for all possible use cases. This has proven to be extremely difficult, and to date few solutions come close. However, when deployed, ML classifiers are not stateless systems that must respond to arbitrary inputs. Can we make use of additional knowledge (e.g., by making the classifier stateful, or by tailoring the defense to a single deployment setting) that improves our ability to design defenses? Further, it might also be useful to think about ways to ensure graceful degradation of classifier performance in critical applications. For instance, instead of aiming for robust classifiers that always predict accurately, it might be sufficient to obtain models that fail gracefully (e.g., say “don’t know” or “the class is either cat or dog”).
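As one way to picture graceful degradation, here is a small sketch (our illustration, with assumed confidence thresholds, not a defense proposed at the workshop) of a classifier that abstains or returns a short candidate list instead of always committing to a single label:

```python
# Illustrative only: abstention on top of any classifier's softmax output.
# The thresholds are hypothetical and would need to be tuned per application.
import numpy as np

def predict_gracefully(probs, labels, single_thresh=0.9, pair_thresh=0.9):
    order = np.argsort(probs)[::-1]               # classes sorted by confidence
    if probs[order[0]] >= single_thresh:
        return labels[order[0]]                   # confident single prediction
    if probs[order[0]] + probs[order[1]] >= pair_thresh:
        return f"either {labels[order[0]]} or {labels[order[1]]}"
    return "don't know"                           # abstain when confidence is low

labels = ["cat", "dog", "fox"]
print(predict_gracefully(np.array([0.95, 0.04, 0.01]), labels))  # cat
print(predict_gracefully(np.array([0.50, 0.45, 0.05]), labels))  # either cat or dog
print(predict_gracefully(np.array([0.40, 0.35, 0.25]), labels))  # don't know
```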
(#3) ML techniques tailored for privacy. Current approaches in the literature “tailor” privacy solutions to ML. Whether based on cryptography (e.g., homomorphic encryption) or statistical tools (e.g., differential privacy), they often aim to add privacy to existing ML techniques. We believe that the orthogonal approach, of designing new ML models or algorithms that are better suited for privacy-preserving techniques, is heavily underrepresented. We hope to encourage preliminary explorations in this space, even if they currently fail to reach state-of-the-art results.
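For context, here is a minimal sketch of the “add privacy to existing ML” pattern that this paragraph contrasts against: a single DP-SGD-style update with per-example gradient clipping and Gaussian noise. The toy linear model, clip norm, and noise scale are assumptions made for illustration, not part of the workshop material.

```python
# Illustrative only: one DP-SGD-style step (clip per-example gradients, add noise)
# for a toy linear regression. Hyperparameters below are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(32, 10)), rng.normal(size=32)   # one mini-batch
w = np.zeros(10)
clip_norm, noise_mult, lr = 1.0, 1.1, 0.1

# Per-example gradients of the squared error for a linear model: 2 * (w.x - y) * x
per_example_grads = 2 * (X @ w - y)[:, None] * X

# Clip each example's gradient to bound its influence, then add calibrated noise.
norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
noisy_sum = clipped.sum(axis=0) + rng.normal(scale=noise_mult * clip_norm, size=10)

w -= lr * noisy_sum / len(X)   # gradient step on the noisy average
```

Approaches in the spirit of (#3) would instead ask which model families make steps like this cheap or unnecessary, rather than bolting the mechanism onto an existing training loop.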
(#4) Incentives in ML fairness and ethics. Current approaches to ML fairness and ethics assume that the ML model owner is willing to collaborate and implement proposed solutions. However, the owner does not always have the incentives, the knowledge, or the means to implement these solutions. We encourage the community to think about solutions that consider the model owner as adversarial and attempt to increase fairness “from the outside” of the model, e.g., by modifying its inputs during training or inference. As part of this reflection, we hope submissions to the workshop will challenge existing definitions of ethics in machine learning.
(#5) Friendly uses of adversarial ML. Adversarial ML usually carries a negative connotation. This stems from the assumption that model owners are honest and ethical. However, ML is deployed in many real-world scenarios with questionable motives (e.g., privacy-invasive applications, social sorting). In such scenarios, adversarial machine learning may become a gold standard for protecting users and communities. We welcome applications of adversarial techniques used to build solutions that help combat unethical machine learning applications.
Organizing Committee
Nicolas Papernot
[Chair]
Google Brain
Florian Tramer
[Co-chair]
Stanford University
Carmela Troncoso
EPFL
Nicholas Carlini
Google Brain
Shibani Santurkar
MIT
Program Committee
- Adria Gascon (The Alan Turing Institute)
- Akshayvarun Subramanya (University of Maryland, Baltimore County)
- Anand Sarwate (Rutgers University)
- Aniruddha Saha (University of Maryland, Baltimore County)
- Anish Athalye (Massachusetts Institute of Technology)
- Asia Biega (Max Planck Institute for Informatics)
- Aurélien Bellet (INRIA)
- Aylin Caliskan (George Washington University)
- Berkay Celik (Purdue University)
- Bogdan Kulynych (EPFL)
- Catuscia Palamidessi (Laboratoire d'informatique de l'École polytechnique)
- Congzheng Song (Cornell University)
- Dan Hendrycks (UC Berkeley)
- Dimitris Tsipras (Massachusetts Institute of Technology)
- Earlence Fernandes (University of Wisconsin-Madison)
- Eric Wong (Carnegie Mellon University)
- Fartash Faghri (University of Toronto)
- Giovanni Cherubin (EPFL)
- Hadi Salman (Microsoft Research)
- Jamie Hayes (University College London)
- Jason Martin (Intel Corporation)
- Jerry Li (Microsoft Research)
- Jonas Rauber (Max Planck Research School for Intelligent Systems)
- Julius Adebayo (Massachusetts Institute of Technology)
- Kassem Fawaz (University of Wisconsin-Madison)
- Kathrin Grosse (CISPA Helmholtz Center / SIC)
- Kristian Lum (Human Rights Data Analysis Group)
- Mahmood Sharif (Carnegie Mellon University)
- Maksym Andriushchenko (EPFL)
- Matthew Jagielski (Northeastern University)
- Octavian Suciu (University of Maryland)
- Pin-Yu Chen (IBM Research AI)
- Sanghyun Hong (University of Maryland, College Park)
- Seda Guerses (KU Leuven)
- Shruti Tople (Microsoft Research)
- Shuang Song (Google)
- Sven Gowal (DeepMind)
- Varun Chandrasekaran (University of Wisconsin-Madison)
- Yair Zick (National University of Singapore)
- Yang Zhang (CISPA Helmholtz Center for Information Security)
- Yizheng Chen (Columbia University)
Call For Papers
Submission deadline: February 12th, 2020 Anywhere on Earth (AoE)
Notification sent to authors: February 25, 2020
Submission server: https://cmt3.research.microsoft.com/ICLRTML2020/
The workshop will include contributed papers. Based on the PC’s recommendation, each paper accepted to the workshop will be allocated either a contributed talk or a poster presentation (with a lightning talk).
Submitted papers are expected to introduce novel ideas or results. Submissions should follow the ICLR format and not exceed 4 pages (excluding references, appendices or large figures).
Work that has been previously published (including in the ICLR 2020 main conference) will not be accepted at the workshop.
We invite submissions on any aspect of machine learning that relates to computer security (and vice versa). This includes, but is not limited to:
- Adversarial robustness: New approaches that may be risky and differ from the existing literature. Reviewers will pay special attention to the stated threat model and its motivation. Threat models beyond Lp norms are encouraged.
- Real-world attacks: Apply a known (academic) attack to a system deployed in production to show how it fails.
- Training-time attacks and defenses: New approaches that study the threat model where the adversary has access to the training data or algorithm.
- Evaluating privacy of models: Better and broader quantification methods to measure the extent to which models trained on sensitive data reveal that data.
- ML algorithms for private learning: New ML models or algorithms that are better suited for privacy-preserving techniques, rather than retroactively adapting existing ML algorithms to be private.
- Alternate uses of secure and private learning: Evaluate other benefits of training models to be robust or private.
- Unintended consequences of secure or private learning: Identify unintended consequences of training robust or private models, e.g., on fairness.
- Evaluating robustness to model stealing: New methods to quantify the difficulty of stealing trained ML models, and defenses against stealing.
- Ethical machine learning: Definitions and applications of ethics when considering security and privacy aspects in machine learning.
- Fresh look at incentives in ML: Solutions that consider the model owner as adversarial and attempt to increase privacy, fairness, equality, etc. “from the outside” of the model.
- Foundations for secure or private learning: Proposals for formal foundations of secure or private learning.
- Position papers: State a new controversial position or a research agenda that is under-studied.
When relevant, submissions are encouraged to clearly state their threat model, release open-source code, and take particular care in conducting ethical research. Reviewing will be performed in a single-blind fashion (reviewers will be anonymous, but not authors). Reviewing criteria include (a) relevance, (b) quality of the methodology and experiments, and (c) originality.
This workshop will not have proceedings.
Contact trustworthyiclr20@googlegroups.com for any questions.