
Rashomon Set of Private Models

Rakshit, Patrik, Ulrich, (Juba?)


Contents

- Background
- Introducing “Rashomon sensitivity”
- Metrics
- Utility
- Privacy
- Fairness
- Connections
Background:

- DPSGD
- Rashomon set
DPSGD
Reference:

Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep Learning with Differential Privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS '16). Association for Computing Machinery, New York, NY, USA, 308–318. https://doi.org/10.1145/2976749.2978318
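The core step in DP-SGD, as described in the paper above, is: compute per-example gradients, clip each to a fixed L2 norm C, average them, and add Gaussian noise scaled by a noise multiplier. A minimal NumPy sketch of one such update; the function name and toy gradients are illustrative, not the paper's implementation:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One DP-SGD update: clip each per-example gradient to L2 norm
    `clip_norm`, average, then add Gaussian noise with std
    `noise_multiplier * clip_norm / batch_size`."""
    clipped = [g / max(1.0, np.linalg.norm(g) / clip_norm)
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return params - lr * (avg + noise)

# Toy usage: 4 per-example gradients over 3 parameters.
rng = np.random.default_rng(0)
grads = [rng.normal(size=3) for _ in range(4)]
new_params = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1,
                         lr=0.1, params=np.zeros(3), rng=rng)
```

With `noise_multiplier=0` and a very large `clip_norm`, this reduces to plain averaged SGD, which is a handy sanity check.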
Rashomon Set
- “A set of models which all perform roughly equally well is called a Rashomon set.” [1]

- Some works refer to a variation of this as “predictive multiplicity”:
- Predictive multiplicity: the ability of a prediction problem to admit competing models with conflicting predictions [2]

[1] Machine Learning Notes At Random (MLNAR). https://www.mlnar.com/rashomon-sets.html

[2] Charles T. Marx, Flavio Du Pin Calmon, and Berk Ustun. 2020. Predictive Multiplicity in Classification. In Proceedings of the 37th International Conference on Machine Learning (ICML '20), Vol. 119. JMLR.org, Article 628, 6765–6774.
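A simple way to materialize such a set is to train many models and keep every one whose test score falls within a tolerance ε of the best score. A minimal sketch; the helper name and the accuracy numbers are made up for illustration:

```python
def rashomon_set(models, scores, epsilon):
    """Keep every model whose score is within `epsilon` of the best
    score -- a simple score-based Rashomon set."""
    best = max(scores)
    return [m for m, s in zip(models, scores) if s >= best - epsilon]

# Toy usage: three trained models with their test accuracies.
accs = {"m1": 0.91, "m2": 0.90, "m3": 0.84}
names, scores = zip(*accs.items())
print(rashomon_set(names, scores, epsilon=0.02))  # → ['m1', 'm2']
```

Widening ε grows the set; with ε = 0.10 all three toy models would qualify.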
Defining our Rashomon set

- In my experiments, I consider:
- Same architecture, same training set, and same batch size
- Different hyperparameters (learning rate, random seed, optimizer)

A set of private (DPSGD) models having similar metric scores: a Rashomon set of private models
Understanding tradeoffs

[Diagram of the three metrics: Utility, Privacy, and Fairness]
(1) Utility
- In a utility Rashomon set, all models in the set have (nearly) equal test accuracy.

(2) Privacy
- In a privacy Rashomon set, all models in the set have (nearly) equal privacy levels (similar values of epsilon).

(3) Fairness
- In a fairness Rashomon set, all models in the set have (nearly) equal fairness levels (here, fairness can be computed with different metrics).
- For simplicity, I will only consider Demographic Parity for now.
- This extends easily to Equalized Odds / Equality of Opportunity.
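Demographic parity, as used above, compares positive-prediction rates across groups. A minimal sketch of the demographic parity difference for binary predictions and a binary group attribute; the function and variable names are illustrative:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between the
    two groups (demographic parity difference)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy usage: group 0 gets positives at rate 3/4, group 1 at rate 1/4.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(preds, groups))  # → 0.5
```

A gap of 0 means both groups receive positive predictions at the same rate.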
“Rashomon sensitivity”

- The sensitivity of the predictions across the models in a Rashomon set.

- Consider a Rashomon set of models:
- We define Rashomon sensitivity as
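One candidate formalization, in the spirit of the predictive-multiplicity metrics of Marx et al. [2], is the maximum pairwise disagreement rate among models in the set. This reading is an assumption on my part, not necessarily the intended definition; a minimal sketch under that assumption:

```python
import itertools
import numpy as np

def rashomon_sensitivity(predictions):
    """Hypothetical formalization: maximum pairwise disagreement rate
    across the models' predictions on a shared test set."""
    disagreements = [
        np.mean(np.asarray(p) != np.asarray(q))
        for p, q in itertools.combinations(predictions, 2)
    ]
    return max(disagreements)

# Toy usage: three models' predictions on the same five test points.
preds = [[1, 0, 1, 1, 0],
         [1, 0, 1, 0, 0],
         [1, 1, 1, 0, 0]]
print(rashomon_sensitivity(preds))  # → 0.4 (models 1 and 3 differ on 2/5 points)
```

Under this reading, a higher value means the set admits models that disagree on more individual predictions, which matches the uncertainty concern in the hypothesis below.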
Hypothesis and Main contribution(s)

- Rashomon sensitivity of private (DP-SGD) models > Rashomon sensitivity of non-private models.

This is bad. Why?

- It leads to higher uncertainty in predictions.
- Balancing one metric (e.g., privacy) leads to a drop in the other metrics (e.g., utility and fairness).
- Misclassification at the individual level is higher in private models.

This can lead to another interesting result →
Connections to Lipschitz property?

- d := Rashomon Sensitivity (R_s)
- M_x, M_y := M_i, M_j in R_s

Reference:

Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness through Awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (ITCS '12). Association for Computing Machinery, New York, NY, USA, 214–226. https://doi.org/10.1145/2090236.2090255

- Some theorems (e.g., Theorem 3.3) may go through. Need to look into this further.
Next steps
- Setting up experiments to obtain Rashomon-set models for these 3 metrics: Utility, Privacy, and Fairness.
