1st Review PPT B8
18CSP107L-MINOR PROJECT
Federated Learning in IoT: Privacy and Efficiency Considerations
BATCH NUMBER : 8
Team Members:
• Priyanshu Dash (RA2111027020088)
• Gurucharan S (RA2111027020079)
• Mohammed Safrith D (RA2111027020068)
Supervisor: Dr. A. Umamageswari, Associate Professor / CSE
AGENDA
• Abstract
• Objective
• Scope and Motivation
• Existing System
• Literature Survey
• Summary of Issues
• Proposed System
• Advantages of Proposed System
• Novel idea
• Architecture Diagram
• Tools to be used
• Application
• Conclusion
• References (Base paper to be included)
ABSTRACT
This project investigates the application of federated learning in
IoT environments to enhance privacy and efficiency. By keeping
data on local devices and only sharing model updates, federated
learning mitigates privacy risks and reduces communication
overhead. The goal is to develop and evaluate a federated
learning framework tailored for IoT systems.
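The mechanism the abstract describes (training stays local; only model updates travel) can be sketched as one federated-averaging round. This is a minimal toy illustration, not the project's implementation: the model is a list of floats, and `local_update` is a hypothetical stand-in for real gradient descent.

```python
# Sketch of one federated-averaging (FedAvg) round over two devices.
# Device data never leaves the device; only weight updates are shared.

def local_update(weights, data, lr=0.1):
    """Hypothetical local step: nudge each weight toward the mean of
    the device's private data (stands in for real gradient descent)."""
    target = sum(data) / len(data)
    return [w + lr * (target - w) for w in weights]

def federated_average(updates, sizes):
    """Server-side average of updates, weighted by local dataset size."""
    total = sum(sizes)
    return [
        sum(u[i] * s for u, s in zip(updates, sizes)) / total
        for i in range(len(updates[0]))
    ]

global_model = [0.0, 0.0]
device_data = {"dev_a": [1.0, 1.0], "dev_b": [3.0, 3.0, 3.0, 3.0]}

updates = [local_update(global_model, d) for d in device_data.values()]
sizes = [len(d) for d in device_data.values()]
global_model = federated_average(updates, sizes)
```

The server never sees `device_data`, only `updates` — which is the privacy property the project builds on.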
OBJECTIVES
• To implement federated learning that keeps raw data on local IoT devices.
MOTIVATION
LITERATURE SURVEY
10. "Hybrid IoT Device Selection With Knowledge Transfer for Federated Learning," IEEE Internet of Things Journal, vol. 11, no. 7, 2024.
FL enables collaborative model training across IoT devices, but resource constraints pose challenges. The method formulates resource constraints as a multi-objective problem, obtains Pareto-optimal solutions, and utilizes knowledge transfer (KT) mechanisms to expedite convergence.
SUMMARY OF ISSUES
1. Data Heterogeneity
Non-IID Data: Data across different devices may not be independent and identically distributed (IID), causing model
training difficulties.
Variability in Data Quality: The quality and quantity of data can vary significantly between devices, affecting model
performance.
2. Communication Overhead
Bandwidth Limitations: Frequent communication between devices and the central server can strain bandwidth and network
resources.
Latency: High latency in communication can slow down the training process.
3. Security and Privacy
Malicious Attacks: Devices can be compromised, leading to poisoning attacks (where false data is sent) or model inversion attacks.
4. Resource Constraints
Computational Power: Not all devices have the same computational capabilities, which can lead to imbalances in
processing and training times.
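The non-IID problem from item 1 is easy to reproduce: if a dataset is partitioned by label, each device's class histogram differs sharply from the global one. A toy demonstration (dataset and split are illustrative):

```python
# Illustration of the non-IID issue: partition a labeled dataset by
# label so each simulated device sees a skewed class distribution.
from collections import Counter

# Toy dataset: (sample_id, label) pairs with three balanced classes.
dataset = [(i, i % 3) for i in range(30)]

# Pathological "sort-by-label" split: each device gets only one class.
by_label = sorted(dataset, key=lambda x: x[1])
devices = [by_label[0:10], by_label[10:20], by_label[20:30]]

# Per-device label histograms; none resembles the balanced global mix,
# which is what degrades naive federated averaging.
distributions = [Counter(label for _, label in shard) for shard in devices]
```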
PROPOSED SYSTEM
1. Device Layer
IoT Devices: Collect and process local data, train local models, and share model updates with cluster
heads.
Local Training: Perform model training on local data and apply differential privacy techniques to updates.
2. Cluster Layer
Dynamic Clustering: Form clusters based on proximity, network conditions, and device capabilities.
Cluster Heads: Aggregate updates from devices within the cluster and send aggregated updates to the
central server or intermediate aggregators.
3. Intermediate Layer
Intermediate Aggregators: Further reduce communication overhead by aggregating updates from
multiple cluster heads before sending to the central server.
4. Central Layer
Central Server: Aggregates updates from cluster heads or intermediate aggregators, updates the global
model, and disseminates it back to cluster heads.
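The four layers above can be sketched as two-level aggregation: cluster heads average their members' updates first, and the central server averages the cluster-level summaries. Data sizes and values below are illustrative; with size-weighted averaging at both levels, the result matches flat aggregation over all devices.

```python
# Sketch of hierarchical aggregation: devices -> cluster heads -> server.

def weighted_avg(updates, sizes):
    """Average update vectors, weighted by local dataset sizes."""
    total = sum(sizes)
    return [sum(u[i] * s for u, s in zip(updates, sizes)) / total
            for i in range(len(updates[0]))]

# Each cluster: list of (update_vector, local_dataset_size) pairs.
clusters = [
    [([1.0], 10), ([3.0], 30)],   # cluster A
    [([5.0], 20), ([7.0], 40)],   # cluster B
]

# Cluster heads aggregate locally, then forward one (update, size) each.
head_updates, head_sizes = [], []
for members in clusters:
    head_updates.append(weighted_avg([u for u, _ in members],
                                     [s for _, s in members]))
    head_sizes.append(sum(s for _, s in members))

# Central server aggregates the much smaller set of cluster summaries.
global_update = weighted_avg(head_updates, head_sizes)

# Sanity check: identical to flat aggregation over every device.
flat = weighted_avg([u for c in clusters for u, _ in c],
                    [s for c in clusters for _, s in c])
```

The server now handles two incoming messages per round instead of four, which is where the bandwidth saving comes from at scale.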
ADVANTAGES OF PROPOSED SYSTEM
1. Hierarchical Aggregation:
• Aggregates model updates at the cluster level before sending them to the central server, reducing data transmission and bandwidth usage.
• Intermediate aggregators in large deployments handle some aggregation tasks, decreasing the load on the central server.
2. Adaptive Communication Protocol:
• Bandwidth Optimization: Adjusts update frequency and size based on network conditions and device capabilities, ensuring efficient bandwidth usage and preventing congestion.
• Latency Reduction: Prioritizes updates based on network latency and reliability for timely model updates, accelerating the training process.
3. Dynamic Clustering:
Groups devices based on proximity and network conditions, enabling efficient local communication within
clusters and reducing data transmission distance and time.
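One simple heuristic consistent with the adaptive communication protocol above: a device transmits its update only when the change is significant, saving bandwidth on near-idle rounds. The threshold value and the choice of L2 norm are illustrative assumptions, not the project's specified protocol.

```python
# Sketch of threshold-based update suppression for bandwidth saving.
import math

def should_transmit(update, threshold=0.05):
    """Send only if the L2 norm of the update is significant."""
    norm = math.sqrt(sum(u * u for u in update))
    return norm >= threshold

# A near-zero update is suppressed; a substantial one is sent.
sent = [u for u in ([0.001, 0.002], [0.3, -0.4]) if should_transmit(u)]
```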
NOVEL IDEA
This project combines differential privacy or similar techniques
with communication efficiency measures (selective participation,
model compression) to achieve a privacy-preserving and efficient
federated learning system specifically tailored for smart home
anomaly detection.
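The differential-privacy step mentioned above is commonly realized by clipping each device's update to a norm bound and adding Gaussian noise before sharing. A minimal sketch — the clip bound and noise scale here are illustrative and not calibrated to a formal (epsilon, delta) budget:

```python
# Clip-then-noise sketch of a differentially private update.
import math
import random

def privatize(update, clip=1.0, sigma=0.1, rng=random.Random(0)):
    """Clip update to L2 norm <= clip, then add Gaussian noise."""
    norm = math.sqrt(sum(u * u for u in update))
    scale = min(1.0, clip / norm) if norm > 0 else 1.0
    clipped = [u * scale for u in update]
    return [c + rng.gauss(0.0, sigma) for c in clipped]

noisy = privatize([3.0, 4.0])  # raw norm 5.0, clipped to norm 1.0
```

Clipping bounds any single device's influence on the aggregate; the noise then masks individual contributions while the averaged signal survives across many devices.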
ARCHITECTURE DIAGRAM
TOOLS TO BE USED
• TensorFlow Federated (TFF): A library for federated learning designed to work with
TensorFlow, enabling simulations of federated algorithms on local devices.
• PySyft: An open-source library for encrypted, privacy-preserving machine learning,
supporting federated learning, differential privacy, and encrypted computation.
• PyTorch: A popular deep learning library that can be used alongside PySyft for federated
learning experiments.
• NumPy and Pandas: Essential libraries for data manipulation and numerical computations.
• Scikit-learn: A machine learning library that can be used for preliminary data analysis and
traditional machine learning algorithms.
APPLICATION
Smart Homes: Improve privacy and efficiency in smart home devices.
Wearable Devices: Ensure privacy and efficiency in health and fitness trackers.
CONCLUSION
REFERENCES
[1] L.S. Vailshery, "Number of IoT connected devices worldwide 2019–2021, with forecasts to 2030,"
2022.
[2] S.F. Ahmed, M.S.B. Alam, S. Afrin, S.J. Rafa, N. Rafa, and A.H. Gandomi, "Insights into Internet of
Medical Things (IoMT): Data fusion, security issues and potential solutions," Inf. Fusion, vol. 102, 2024.
[3] R. Sanchez-Iborra and A.F. Skarmeta, "TinyML-enabled frugal smart objects: Challenges and
opportunities," IEEE Circuits Syst. Mag., vol. 20, no. 3, pp. 4-18, 2020.
[4] P.P. Ray, "A review on TinyML: State-of-the-art and prospects," J. King Saud Univ. - Comput. Inf.
Sci., vol. 34, no. 4, pp. 1595-1623, 2022.
[5] Q. Li, Z. Wen, Z. Wu, S. Hu, N. Wang, X. Liu, and B. He, "A survey on federated learning systems:
Vision, hype and reality for data privacy and protection," IEEE Trans. Knowl. Data Eng., vol. 35, pp.
3347-3366, 2023.
[6] M. Kachuee, S. Fazeli, and M. Sarrafzadeh, "ECG heartbeat classification: A deep transferable
representation," presented at the IEEE International Conference on Healthcare Informatics, 2018, pp. 443-
444.
[7] R.F. Vitor, "Car trips data log," 2017. [Online]. Available:
https://www.kaggle.com/datasets/vitorrf/cartripsdatamining
[8] V. Tsoukas, E. Boumpa, G. Giannakas, and A. Kakarountas, "A review of machine learning and
TinyML in healthcare," in PCI ’21: Proc. 25th Pan-Hellenic Conference on Informatics, ACM, 2022, pp.
69-73.
[9] S.K. Lo, Q. Lu, C. Wang, H.-Y. Paik, and L. Zhu, "A systematic literature review on federated machine
learning: From a software engineering perspective," ACM Comput. Surv., vol. 54, no. 5, pp. 1-39, 2021.
[10] H. Ren, D. Anicic, and T. Runkler, "TinyOL: TinyML with online-learning on microcontrollers,"
presented at the International Joint Conference on Neural Networks (IJCNN), IEEE, 2021, pp. 1-8.
[11] C.R. Banbury, V.J. Reddi, M. Lam, et al., "Benchmarking TinyML systems: Challenges and
direction," 2020.
[12] R. David, J. Duke, A. Jain, et al., "TensorFlow Lite Micro: Embedded Machine Learning on
TinyML Systems," in Proceedings of the 4th Machine Learning and Systems (MLSys 2021), 2021.
[13] L. Lai, N. Suda, and V. Chandra, "CMSIS-NN: Efficient neural network kernels for arm Cortex-M
CPUs," 2018.
[14] Apache TVM Project, 2021. [Online]. Available: https://tvm.apache.org/
[15] L. Ravaglia, M. Rusci, D. Nadalini, A. Capotondi, F. Conti, and L. Benini, "A tinyml platform for
on-device continual learning with quantized latent replays," IEEE J. Emerg. Sel. Top. Circuits Syst.,
vol. 11, no. 4, pp. 789-802, 2021.
[16] J. Lin, W.-M. Chen, Y. Lin, J. Cohn, C. Gan, and S. Han, "MCUNet: Tiny deep learning on IoT
devices," Adv. Neural Inf. Process. Syst., vol. 33, pp. 11711-11722, 2020.
[17] J. Montiel, M. Halford, S.M. Mastelini, G. Bolmier, R. Sourty, R. Vaysse, A. Zouitine, H.M.
Gomes, J. Read, T. Abdessalem, and A. Bifet, "River: machine learning for streaming data in Python,"
2020.
[18] K. Kopparapu, E. Lin, J.G. Breslin, and B. Sudharsan, "TinyFedTL: Federated transfer learning on
ubiquitous tiny IoT devices," in IEEE Int. Conf. on Pervasive Computing and Communications
Workshops and Other Affiliated Events, IEEE, 2022, pp. 79-81.
[19] M.M. Grau, R.P. Centelles, and F. Freitag, "On-device training of machine learning models on
microcontrollers with a look at federated learning," in Conf. on Information Technology for Social
Good, 2021, pp. 198-203.
[20] Y.D. Kwon, R. Li, S.I. Venieris, J. Chauhan, N.D. Lane, and C. Mascolo, "TinyTrain: Deep neural
network training at the extreme edge," 2023.