Towards Trustworthy ML:
Rethinking Security and Privacy for ML

ICLR 2020 Workshop

Date: April 26, 2020 (Sunday)

Location: Millennium Hall, Addis Ababa, Ethiopia (co-located with ICLR 2020)

Contact: trustworthyiclr20@googlegroups.com (this will email all organizers)

Room: TBD

Abstract—As ML systems are pervasively deployed, security and privacy challenges have become central to their design. The community has produced a vast amount of work to address these challenges and increase trust in ML. Yet, much of it concentrates on well-defined problems that are mathematically tractable but hard to translate to the threats that target real-world systems.

This workshop calls for novel research that addresses the security and privacy risks arising from the deployment of ML, from malicious exploitation of vulnerabilities (e.g., adversarial examples or data poisoning) to concerns about fair, ethical, and privacy-preserving uses of data. We aim to provide a home for new ideas “outside the box”, even if preliminary solutions do not yet match the performance guarantees of known techniques. We believe such ideas could prove invaluable in spurring new lines of research that make ML more trustworthy.

We aim to bring together experts from a variety of communities (ML, computer security, data privacy, fairness & ethics) in an effort to synthesize promising ideas and research directions, and to foster and strengthen cross-community collaborations. Indeed, many fundamental problems studied in these diverse areas can be broadly recast as questions around the (in-)stability of ML models: generalization in ML, model memorization in privacy, adversarial examples in security, model bias in fairness and ethics, etc. Problems on which we hope to encourage progress include:

(#1) Adversarial robustness beyond Lp balls. Recent years have seen a tremendous amount of research devoted to making ML models robust to small test-time perturbations sampled adversarially from an Lp ball. While seemingly simple, this has proven a difficult challenge that remains mostly unsolved today. Yet, even if robustness in an Lp ball were achieved, complete model robustness would still be far from guaranteed. We encourage researchers to move beyond this “toy” problem and characterize the robustness of real-world systems for which adversarial examples pose a threat (e.g., malware detection, visual ad-blocking, voice assistants, etc.). We hope that the specifics of these systems and their deployments may point towards alternative, and more easily attainable, avenues towards secure inference.
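
As a concrete illustration of the Lp-ball threat model discussed above, the following is a minimal sketch that crafts an L-infinity-bounded adversarial example with a one-step FGSM attack. The toy linear model, the epsilon value, and the random data are placeholders chosen only to make the snippet self-contained; they are not part of any proposed method.

# A minimal sketch of the L-infinity threat model: a one-step FGSM
# perturbation of size at most eps around an input. Model, data, and
# eps are illustrative placeholders.
import torch
import torch.nn.functional as F

def fgsm_linf(model, x, y, eps=8 / 255):
    """One-step L-infinity attack: move x by eps * sign(grad of the loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # step to the L-inf ball boundary
    return x_adv.clamp(0.0, 1.0).detach()  # keep a valid image in [0, 1]

# Example usage with a toy linear "classifier" on random data.
model = torch.nn.Linear(3 * 32 * 32, 10)
x = torch.rand(4, 3 * 32 * 32)             # 4 flattened 32x32 RGB images
y = torch.randint(0, 10, (4,))
x_adv = fgsm_linf(model, x, y)
print((x_adv - x).abs().max())             # <= eps by construction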

(#2) Stateful robustness. Current adversarial example research focuses on securing a classifier for all possible use cases. This has proven extremely difficult, and to date few solutions come close. However, when deployed, ML classifiers are not stateless systems that must respond to arbitrary inputs. Can we make use of additional knowledge (e.g., by making the classifier stateful, or by tailoring the defense to a single deployment setting) to improve our ability to design defenses? Further, it may also be useful to think about ways to ensure graceful degradation of classifier performance in critical applications. For instance, instead of aiming for robust classifiers that always predict accurately, it might be sufficient to build models that fail gracefully (e.g., say “don’t know” or “the class is either cat or dog”).
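
To make the notion of graceful failure concrete, here is a minimal sketch of a prediction wrapper that returns a single label when confident, a small candidate set (e.g., “either cat or dog”) when a few classes dominate, and “don’t know” otherwise. The thresholds and class counts are assumed, untuned values for illustration only.

# A minimal sketch of graceful failure: abstain or report a candidate set
# when the predictive distribution is not confident. Thresholds are
# illustrative placeholders, not tuned values.
import torch

def predict_with_abstention(logits, conf_threshold=0.9, top_k=2):
    probs = torch.softmax(logits, dim=-1)
    top_p, top_classes = probs.topk(top_k, dim=-1)
    outputs = []
    for p, classes in zip(top_p, top_classes):
        if p[0] >= conf_threshold:
            outputs.append(("predict", classes[0].item()))   # confident answer
        elif p.sum() >= conf_threshold:
            outputs.append(("either-of", classes.tolist()))  # small candidate set
        else:
            outputs.append(("don't know", None))             # abstain outright
    return outputs

# Example: three predictive distributions over 3 classes.
logits = torch.log(torch.tensor([
    [0.97, 0.02, 0.01],   # confident -> single prediction
    [0.55, 0.40, 0.05],   # two plausible classes -> "either-of"
    [0.40, 0.35, 0.25],   # spread out -> abstain
]))
print(predict_with_abstention(logits))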

(#3) ML techniques tailored for privacy. Current approaches in the literature “tailor” privacy solutions to ML. Whether based on cryptography (e.g., homomorphic encryption) or statistical tools (e.g., differential privacy), they often aim to add privacy to existing ML techniques. We believe that the orthogonal approach of designing new ML models or algorithms that are better suited to privacy-preserving techniques is heavily underrepresented. We hope to encourage preliminary explorations in this space, even if they currently fail to reach state-of-the-art results.
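
As one example of the “add privacy to an existing technique” pattern described above, the sketch below applies per-example gradient clipping and Gaussian noise in the style of DP-SGD (Abadi et al., 2016). The clip norm and noise multiplier are illustrative placeholders, and the snippet omits the privacy accounting a real deployment would require.

# A minimal sketch of DP-SGD-style noisy gradient aggregation: clip each
# per-example gradient, average, and add Gaussian noise. Numbers are
# illustrative placeholders; no privacy accounting is performed.
import torch

def noisy_clipped_mean(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    """Clip each example's gradient to clip_norm, average, and add noise."""
    clipped = []
    for g in per_example_grads:
        scale = torch.clamp(clip_norm / (g.norm() + 1e-12), max=1.0)
        clipped.append(g * scale)
    mean = torch.stack(clipped).mean(dim=0)
    noise = torch.randn_like(mean) * noise_multiplier * clip_norm / len(clipped)
    return mean + noise

# Example: 8 fake per-example gradients for a 10-dimensional parameter.
grads = [torch.randn(10) for _ in range(8)]
print(noisy_clipped_mean(grads))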

(#4) Incentives in ML fairness and ethics. Current approaches to ML fairness and ethics assume that the ML model owner is willing to collaborate and implement proposed solutions. However, the owner does not always have the incentives, the knowledge, or the means to implement these solutions. We encourage the community to think about solutions that treat the model owner as adversarial and attempt to increase fairness “from the outside” of the model, e.g., by modifying its inputs during training or inference. As part of this reflection, we hope submissions to the workshop will challenge existing definitions of ethics in machine learning.

(#5) Friendly uses of adversarial ML. Adversarial ML is usually cast in a negative light. This framing stems from the assumption that model owners are honest and ethical. However, ML is deployed in many real-world scenarios with questionable motives (e.g., privacy-invasive applications or social sorting). In such scenarios, adversarial machine learning may become a gold standard for protecting users and communities. We welcome applications of adversarial techniques used to build solutions that help combat unethical machine learning applications.

Sponsor

Thank you to the Open Philanthropy Project for sponsoring this event. Their grant will fund a best paper award as well as travel support.

Call For Papers

Submission deadline: January 31, 2020 Anywhere on Earth (AoE)

Notification sent to authors: February 25, 2020

Submission server: https://cmt3.research.microsoft.com/ICLRTML2020/

The workshop will include contributed papers. Based on the PC’s recommendation, each paper accepted to the workshop will be allocated either a contributed talk or a poster presentation (with a lightning talk).

Submitted papers are expected to introduce novel ideas or results. Submissions should follow the ICLR format and not exceed 4 pages (excluding references, appendices or large figures).

Work that has been previously published (including in the ICLR 2020 main conference) will not be accepted at the workshop.

We invite submissions on any aspect of machine learning that relates to computer security (and vice versa). This includes, but is not limited to, the problem areas outlined above.

When relevant, submissions are encouraged to clearly state their threat model, release open-source code, and take particular care to conduct ethical research. Reviewing will be performed in a single-blind fashion (reviewers will be anonymous, but not authors). Reviewing criteria include (a) relevance, (b) quality of the methodology and experiments, and (c) originality.

This workshop will not have proceedings.

Contact trustworthyiclr20@googlegroups.com for any questions.

Schedule

TBA

Organizing Committee

Nicolas Papernot
[Chair]
University of Toronto and Vector Institute

Florian Tramer
[Co-chair]
Stanford University

Carmela Troncoso
EPFL

Nicholas Carlini
Google Brain

Shibani Santurkar
MIT

Program Committee