Towards Trustworthy ML:
Rethinking Security and Privacy for ML

ICLR 2020 Workshop

Date: April 26, 2020 (Sunday)

Location: Online (co-located with ICLR 2020; originally planned for Millennium Hall, Addis Ababa, Ethiopia)

Contact: trustworthyiclr20@googlegroups.com (this will email all organizers)


How to participate?

Streaming

Throughout the day we will be streaming (recorded) invited talks, (recorded) contributed talks, and a (live) panel discussion. Please see the Schedule for details on what is being (live)streamed and when. To engage with the workshop during the (live)stream, use the poster session Zoom rooms and the hallway track channels described below.

Poster Session

For each paper presented at the workshop, we will host (1) the pre-recorded presentation from SlidesLive and (2) a Zoom breakout room during the poster session. The Zoom breakout rooms will be open only during the poster session timeslot (see the Schedule), when authors will join to answer your questions face-to-face.

-> The Zoom links for individual posters can be found under the list of "Accepted papers".

Hallway track: Rocket Chat and Zoom

Throughout the day, we encourage attendees to participate in "hallway track" discussions on Rocket Chat and/or in a dedicated Zoom meeting room (unmonitored and unmoderated). Please follow the ICLR code of conduct.

-> The Hallway Track Rocket Chat can be found here and the Hallway Track Zoom here.

Schedule (Eastern time)

Sessions marked "Live" below are streamed live; all other talks are pre-recorded. The poster session is distributed across multiple Zoom meetings, listed under "Accepted papers" below.

8:45am Live opening remarks (Nicolas Papernot) Livestream (above)
8:50am Zico Kolter: TML_3 Beyond "provable" robustness: new directions in adversarial robustness Recording (mirror site)
9:30am Lujo Bauer: TML_0 On the Susceptibility to Adversarial Examples Under Real-World Constraints Recording
10:10am Seeta Peña Gangadharan: TML_1 Context, Research, Refusal: Perspectives on Abstract Problem-Solving Recording (audio), Slides + Transcript
10:50am Timnit Gebru: TML_2 Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning Recording
11:30am Live Panel moderated by Ram Shankar (Livestream above; ask + vote questions):
  • Zico Kolter
  • Lujo Bauer
  • Seeta Peña Gangadharan
  • Timnit Gebru
  • Justin Gilmer
Break
1:00pm Live poster session with all authors of accepted papers (Zoom links below under "Accepted papers")
3:00pm TML_4 Increasing the robustness of DNNs against image corruptions by playing the Game of Noise Recording
3:20pm TML_5 Bounding Singular Values of Convolution Layers Unable to present
3:40pm TML_6 Revisiting Ensembles in an Adversarial Context: Improving Natural Accuracy Recording
4:00pm TML_7 Games for Fairness and Interpretability Recording
4:20pm TML_8 Output Diversified Initialization for Adversarial Attacks Recording
4:40pm TML_9 On the Benefits of Models with Perceptually-Aligned Gradients Recording
5:00pm TML_10 Randomized Smoothing of All Shapes and Sizes Recording
5:20pm TML_11 On Pruning Adversarially Robust Neural Networks Recording
5:40pm TML_12 DADI: Dynamic Discovery of Fair Information with Adversarial Reinforcement Learning Recording
6:00pm TML_13 Attacking Neural Text Detectors Recording
6:20pm TML_14 Black-Box Smoothing: A Provable Defense for Pretrained Classifiers Recording
6:40pm TML_15 Privacy-preserving collaborative machine learning on genomic data using TensorFlow Recording
7:00pm TML_16 Preventing backdoor attacks via Student-Teacher Ensemble Training Recording
7:20pm TML_17 Politics of Adversarial Machine Learning Recording
7:40pm TML_18 Improved Wasserstein Attacks and Defenses Recording
8:00pm TML_19 Adversarial Robustness in Data Augmentation Recording
End

Accepted papers

Sponsor

Thank you to the Open Philanthropy Project for sponsoring this event. Their grant will fund a best paper award.

Abstract

As ML systems are pervasively deployed, security and privacy challenges have become central to their design. The community has produced a vast amount of work to address these challenges and increase trust in ML. Yet much of it concentrates on well-defined problems that are mathematically tractable but hard to translate to the threats that target real-world systems.

This workshop calls for novel research that addresses the security and privacy risks arising from the deployment of ML, from malicious exploitation of vulnerabilities (e.g., adversarial examples or data poisoning) to concerns about fair, ethical, and privacy-preserving uses of data. We aim to provide a home for new ideas "outside the box", even if proposed preliminary solutions do not match the performance guarantees of known techniques. We believe such ideas could prove invaluable in spurring new lines of research that make ML more trustworthy.

We aim to bring together experts from a variety of communities (ML, computer security, data privacy, fairness & ethics) in an effort to synthesize promising ideas and research directions, as well as foster and strengthen cross-community collaborations. Indeed, many fundamental problems studied in these diverse areas can be broadly recast as questions around the (in-)stability of ML models: generalization in ML, model memorization in privacy, adversarial examples in security, model bias in fairness and ethics, etc. Problems that we hope to encourage progress on are:

(#1) Adversarial robustness beyond Lp balls. Recent years have seen a tremendous amount of research devoted to making ML models robust to small test-time perturbations sampled adversarially from an Lp ball. While seemingly simple, this has proven a difficult challenge that remains mostly unsolved today. Yet, even if robustness in an Lp ball were achieved, complete model robustness would still be far from guaranteed. We encourage researchers to move beyond this "toy" problem and characterize the robustness of real-world systems for which adversarial examples pose a threat (e.g., malware detection, visual ad-blocking, voice assistants, etc.). We hope that the specifics of these systems and of their deployments may point toward alternative, and more easily attainable, avenues to secure inference.
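
For concreteness, below is a minimal sketch of the standard threat model described above: a projected gradient descent (PGD) attack constrained to an L-infinity ball of radius eps. The code is illustrative only (PyTorch; the model, data, and hyperparameters are assumptions, not workshop artifacts):

    import torch
    import torch.nn.functional as F

    def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # Search for a perturbation delta with ||delta||_inf <= eps that maximizes the loss.
        # (In practice you would also freeze or zero the model's own gradients.)
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta), y)
            loss.backward()
            # Ascend along the gradient sign, then project back into the eps-ball
            # and into the valid pixel range [0, 1].
            delta.data = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
            delta.data = (x + delta.data).clamp(0, 1) - x
            delta.grad.zero_()
        return (x + delta).detach()

Robustness in this sense only bounds what an adversary can do inside the eps-ball; the real-world systems listed above face far less constrained perturbations.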

(#2) Stateful robustness. Current adversarial example research focuses on securing a classifier for all possible use cases. This has proven extremely difficult, and to date few solutions come close. However, when deployed, ML classifiers are not stateless systems that must respond to arbitrary inputs. Can we make use of additional knowledge (e.g., by making the classifier stateful, or by tailoring the defense to one deployment setting) to improve our ability to design defenses? It might also be useful to think about ways to ensure graceful degradation of classifier performance in critical applications. For instance, instead of aiming for robust classifiers that always predict accurately, it might be sufficient to build models that fail gracefully (e.g., say "don't know" or "the class is either cat or dog").
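
As one illustration of graceful degradation (a hypothetical sketch, not a proposed defense), a deployed classifier can be wrapped so that it abstains, returning "don't know", whenever its top softmax confidence falls below a threshold:

    import torch
    import torch.nn.functional as F

    ABSTAIN = -1  # sentinel label meaning "don't know"

    def predict_or_abstain(model, x, threshold=0.9):
        # Return the predicted class for each input, or ABSTAIN when the top
        # softmax probability is below the confidence threshold.
        with torch.no_grad():
            probs = F.softmax(model(x), dim=-1)
            conf, pred = probs.max(dim=-1)
            return torch.where(conf >= threshold, pred, torch.full_like(pred, ABSTAIN))

Softmax confidence alone is not a robust abstention signal (an adaptive attacker can target high-confidence errors), but it illustrates the kind of interface, a prediction plus the option to refuse, that stateful or deployment-specific defenses could build on.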

(#3) ML techniques tailored for privacy. Current approaches in the literature "tailor" privacy solutions to ML. Whether based on cryptography (e.g., homomorphic encryption) or statistical tools (e.g., differential privacy), they often aim to add privacy to existing ML techniques. We believe that the orthogonal approach of designing new ML models or algorithms that are better suited to privacy-preserving techniques is heavily underrepresented. We hope to encourage preliminary explorations in this space, even if they currently fail to reach state-of-the-art results.
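
For reference, the "add privacy to existing ML" pattern contrasted above is exemplified by DP-SGD-style training. Below is a rough sketch of a single update with per-example gradient clipping and Gaussian noise (function and parameter names are illustrative assumptions, not a specific library's API):

    import torch

    def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.1):
        # per_example_grads: one list of gradient tensors (matching params) per example.
        summed = [torch.zeros_like(p) for p in params]
        for grads in per_example_grads:
            # Clip each example's gradient to bound its individual influence (sensitivity).
            norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
            scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
            for s, g in zip(summed, grads):
                s.add_(g * scale)
        batch = len(per_example_grads)
        for p, s in zip(params, summed):
            # Add Gaussian noise calibrated to the clipping norm, then average and update.
            noise = torch.randn_like(s) * noise_mult * clip_norm
            p.data.add_(-(lr / batch) * (s + noise))

The direction called for above is the converse: models and learning algorithms designed from the start to be better suited to this kind of privacy-preserving machinery.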

(#4) Incentives in ML fairness and ethics. Current approaches to ML fairness and ethics assume that the ML model owner is willing to collaborate and implement proposed solutions. However, the owner does not always have the incentives, the knowledge, or the means to implement these solutions. We encourage the community to think about solutions that treat the model owner as adversarial and attempt to increase fairness "from the outside" of the model, e.g., by modifying its inputs during training or inference. As part of this reflection, we hope submissions to the workshop will challenge existing definitions of ethics in machine learning.

(#5) Friendly uses of adversarial ML. Adversarial ML is usually cast in a negative light. This framing stems from the assumption that model owners are honest and ethical. However, ML is deployed in many real-world scenarios with questionable motives (e.g., privacy-invasive applications, social sorting). In such scenarios, adversarial machine learning may become a gold standard for protecting users and communities. We welcome applications of adversarial techniques that help combat unethical machine learning applications.

Organizing Committee

Nicolas Papernot
[Chair]
Google Brain

Florian Tramer
[Co-chair]
Stanford University

Carmela Troncoso
EPFL

Nicholas Carlini
Google Brain

Shibani Santurkar
MIT

Program Committee

Call For Papers

Submission deadline: February 12, 2020, Anywhere on Earth (AoE)

Notification sent to authors: February 25, 2020

Submission server: https://cmt3.research.microsoft.com/ICLRTML2020/

The workshop will include contributed papers. Based on the PC’s recommendation, each paper accepted to the workshop will be allocated either a contributed talk or a poster presentation (with a lightning talk).

Submitted papers are expected to introduce novel ideas or results. Submissions should follow the ICLR format and not exceed 4 pages (excluding references, appendices or large figures).

Work that has been previously published (including in the ICLR 2020 main conference) will not be accepted at the workshop.

We invite submissions on any aspect of machine learning that relates to computer security (and vice versa), including but not limited to the problem areas outlined in the Abstract above.

When relevant, submissions are encouraged to clearly state their threat model, release open-source code, and take particular care to conduct ethical research. Reviewing will be performed in a single-blind fashion (reviewers will be anonymous, but authors will not be). Reviewing criteria include (a) relevance, (b) quality of the methodology and experiments, and (c) originality.

This workshop will not have proceedings.

Contact trustworthyiclr20@googlegroups.com for any questions.