
Master's Thesis Information Form

Academic Year: 2023 / 2024

Thesis Code: 24/1790M

IDENTIFICATION OF PARTICIPANTS


Thesis title: Differential Privacy in Meta Learning

Speciality: Computer Systems

Student — Registration No.: 19/0183
Surname: BEN ABDELATIF, First name: MAYA
Email: jm_benabdelatif@esi.dz

Supervisor's affiliation — Organization: Conservatoire des Arts et Métiers, Paris, France
Address: , France
Full name: Samia Bouzefrane
Tel: None, Email: samia.bouzefrane@lecnam.net

Co-supervisor: BALLA Amar

DESCRIPTION OF THE FINAL-YEAR PROJECT


Abstract: In recent years, deep learning techniques have seen great success in a variety of fields. However, successful cases are mainly concentrated in areas where large amounts of data can be collected or simulated and where large amounts of computing resources are available, excluding the many areas where data is scarce and expensive or where computing resources are unavailable [1]. This highlights the need for training methods that work effectively with only a few data samples. Meta learning is one such technique that has received increasing attention recently because it enables fast learning and adaptation to new tasks with a small amount of data. It is the process by which existing learning tasks are used to infer a learning rule that enables faster adaptation to new tasks from the same environment, known as "learning how to learn" [2].
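This inner/outer-loop structure can be illustrated with a minimal sketch, assuming a toy family of 1-D linear regression tasks and the Reptile-style meta-update mentioned below [4]; all function names and hyperparameter values here are illustrative, not part of the project:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A task is a 1-D regression problem y = a*x + b with random a, b."""
    a, b = rng.uniform(-2, 2, size=2)
    x = rng.uniform(-1, 1, size=10)
    return x, a * x + b

def sgd_adapt(w, x, y, lr=0.05, steps=20):
    """Inner loop: adapt meta-parameters w = (slope, intercept) to one task."""
    w = w.copy()
    for _ in range(steps):
        err = w[0] * x + w[1] - y
        grad = np.array([np.mean(2 * err * x), np.mean(2 * err)])
        w -= lr * grad
    return w

def reptile(meta_steps=200, eps=0.1):
    """Outer loop (Reptile): nudge meta-parameters toward task-adapted ones,
    so that a few inner steps suffice on a new task from the same family."""
    w = np.zeros(2)
    for _ in range(meta_steps):
        x, y = sample_task()
        w_task = sgd_adapt(w, x, y)
        w += eps * (w_task - w)   # meta-update toward the adapted solution
    return w

w_meta = reptile()
```

The meta-learned `w_meta` serves as an initialization from which a handful of gradient steps adapts to any new task drawn from the same environment.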

Several well-known methods have been proposed in the context of meta learning (MAML [3], Reptile [4], Meta-SGD [5], etc.). However, the nature of meta learning requires sharing task-specific information, which may raise serious privacy risks, especially in distributed settings where all tasks (users) participate in a meta-training process by communicating meta-model updates among themselves, typically with the assistance of a central server, as in federated learning. Studies have shown that in a single-task setting, an adversary with access only to the model can learn detailed information about the training set, such as the presence or absence of specific records [6,7] or a specific property of the global population in the dataset [8]. Furthermore, recent work has shown that deep neural networks can effectively memorize user-unique training examples, which can be recovered even after a single epoch of training [9,10,11,12]. It is therefore crucial to ensure that the sensitive information in every task (dataset) remains private throughout the meta-learning process.
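How shared updates can leak training data is easy to see in a stripped-down case: for a single linear layer trained on one example, the input can be read off the gradients exactly. This is a deliberately simplified sketch of the gradient-leakage idea behind [9,10]; the shapes and loss are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# A private training example and a random linear layer z = W @ x + b.
x = rng.normal(size=4)          # the "secret" input held by a client
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
target = rng.normal(size=3)

# One training step with squared-error loss; these are the gradients
# a client would share in a federated / meta-training round.
z = W @ x + b
g = 2 * (z - target)            # dL/dz
grad_W = np.outer(g, x)         # dL/dW = g x^T
grad_b = g                      # dL/db = g

# Attacker: from grad_W and grad_b alone, recover x exactly,
# since row i of grad_W is g[i] * x.
i = np.argmax(np.abs(grad_b))   # pick a row where dL/db is nonzero
x_recovered = grad_W[i] / grad_b[i]
```

Deep networks require iterative reconstruction rather than this closed form, but the cited attacks show the leakage persists there as well.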

One solution to address this concern is differential privacy (DP). This technique is becoming a standard in privacy-preserving machine learning and provides strong guarantees against data leakage by adding calibrated noise. The challenge, however, is balancing the trade-off between privacy and model utility. Despite its critical importance, the privacy aspect of meta learning remains largely unexplored [13,14], presenting an intriguing avenue for future research.
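The core DP mechanism used in private training (clip each per-example gradient, average, add Gaussian noise, as in DP-SGD-style methods) can be sketched as follows; the function name, the noise multiplier value, and its mapping to an (epsilon, delta) budget are illustrative assumptions, not a definitive implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def dp_average_gradients(per_example_grads, clip_norm=1.0, noise_mult=1.1):
    """One DP aggregation step: clip every per-example gradient to
    L2 norm <= clip_norm (bounding any single example's influence),
    average, then add Gaussian noise scaled by noise_mult * clip_norm.
    noise_mult (sigma) would be derived from the desired privacy budget."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return avg + noise
```

The clipping bound and noise multiplier are exactly where the privacy/utility trade-off mentioned above appears: larger noise strengthens the guarantee but degrades the aggregated update.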

Keywords: Meta Learning, learning to learn, Differential Privacy, privacy-preserving machine learning, Federated Learning

Objectives: The objective of this Master's thesis is to write a report exploring the diverse existing applications of Differential Privacy in the field of Meta Learning.

Expected results: The anticipated outcome is a comprehensive paper introducing the Differential Privacy method and highlighting its diverse applications in the machine learning domain. The document will further provide clear definitions of meta learning and federated meta learning, while analysing the different applications of Differential Privacy within these contexts.

Background of the requested work: A PhD thesis on differential privacy and federated learning is currently underway.

Bibliography
[1] Hospedales, T., Antoniou, A., Micaelli, P., & Storkey, A. (2021). Meta-learning in neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(9), 5149-5169.

[2] J. Schmidhuber, "Evolutionary principles in self-referential learning. On learning how to learn: The meta-meta-meta...-hook," Diploma thesis, Technische Universität München, Munich, Germany, 1987.

[3] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep
networks,” in Proc. 34th Int. Conf. Mach. Learn., 2017, pp. 1126–1135.

[4] A. Nichol and J. Schulman, "Reptile: a scalable meta-learning algorithm," arXiv preprint arXiv:1803.02999, 2018.

[5] Z. Li, F. Zhou, F. Chen, and H. Li, “Meta-SGD: Learning to learn quickly for few shot learning,”
2017, arXiv: 1707.09835.

[6] Shokri, R., Stronati, M., Song, C., & Shmatikov, V. (2017, May). Membership inference attacks
against machine learning models. In 2017 IEEE symposium on security and privacy (SP) (pp. 3-18).
IEEE.

[7] Gu, Y.; Bai, Y.; Xu, S. CS-MIA: Membership inference attack based on prediction confidence series in federated learning. Journal of Information Security and Applications 2022, 67, 103201. https://doi.org/10.1016/j.jisa.2022.103201.

[8] Zhou, J.; Chen, Y.; Shen, C.; Zhang, Y. Property Inference Attacks Against GANs, 2021.
arXiv:2111.07608 [cs, stat].

[9] Zhu, L.; Liu, Z.; Han, S. Deep leakage from gradients. Advances in neural information processing
systems 2019, 32.

[10] Zhao, B.; Mopuri, K.R.; Bilen, H. iDLG: Improved deep leakage from gradients. arXiv preprint arXiv:2001.02610, 2020.
[11] Yin, H.; Mallya, A.; Vahdat, A.; Alvarez, J.M.; Kautz, J.; Molchanov, P. See through Gradients: Image Batch Recovery via GradInversion. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); IEEE: Nashville, TN, USA, 2021; pp. 16332–16341. https://doi.org/10.1109/CVPR46437.2021.01607.

[12] Ren, H.; Deng, J.; Xie, X. GRNN: generative regression neural network—a data leakage attack for federated learning. ACM Transactions on Intelligent Systems and Technology (TIST) 2022, 13, 1–24.

[13] Li, J., Khodak, M., Caldas, S., & Talwalkar, A. (2019). Differentially private meta-learning. arXiv
preprint arXiv:1909.05830.

[14] Zhou, X., & Bassily, R. (2022). Task-level Differentially Private Meta Learning. Advances in Neural
Information Processing Systems, 35, 20947-20959.

Schedule

Computing resources: Provided by the host organization

Research project: None
