Exp2 Shruti C3
Grade: AA / AB / BB / BC / CC / CD / DD
www.isr.uci.edu/projects/archstudio/setup-easy.html
______________________________________________________________________
Theory:
● Architecture Model
An artifact documenting some or all of the architectural design decisions about
a system
● Architecture Visualization
A way of depicting some or all of the architectural design decisions about a
system to a stakeholder
● Architecture View
A subset of related architectural design decisions
● Architectural Processes:
• Architectural design
• Implementation
Abstract of Project:
A personalised finance app that employs federated learning for services such as loan
risk assessment and credit scoring. To uphold data privacy, the app lets users securely
collaborate with financial institutions while ensuring that all computation on their raw
data is conducted locally. It also grants users insight into how their data is utilised,
fostering trust and informed decisions, and helps them understand loan-related risks
and their credit score, all of which is computed in a federated environment.
Architecture of Project:
1. Choose a model on the central server that is either pre-trained or not trained at all.
2. The initial model is then distributed to the clients (devices or local servers).
3. Each client keeps training it on-site using its own local data. The important part
is that this training data can be confidential, including personal photos, emails,
health metrics, or chat logs. Collecting such information in cloud-based
environments could be problematic or simply impossible.
5. Once local training is complete, the updated models are sent back to the central
server via encrypted communication channels. It is worth noting that the server never
receives any raw data, only the trained parameters of the model (e.g., the weights of a
neural network).
5. Updates from all clients are averaged and aggregated into a single shared
model, improving its accuracy.
6. Finally, this model is sent back to all devices and servers.
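The six steps above can be sketched as a single federated-averaging round. This is a minimal, self-contained illustration, not the project's actual code: local_train stands in for real on-device SGD (it simply nudges each weight toward the mean of the client's local data), and all names are hypothetical.

```python
def local_train(weights, data, lr=0.1):
    """Stand-in for a client's on-site training step: nudge each weight
    toward the mean of the client's local data. Raw data never leaves here."""
    target = sum(data) / len(data)
    return [w + lr * (target - w) for w in weights]

def federated_round(global_weights, client_datasets):
    """One round: distribute the model, train locally on each client,
    then average the returned parameters into a new global model."""
    updates = [local_train(list(global_weights), d) for d in client_datasets]
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

# Three clients whose private datasets stay on-device; only weights travel.
clients = [[1.0, 1.2], [0.8, 0.9], [1.1, 1.0]]
model = [0.0]
for _ in range(20):
    model = federated_round(model, clients)
```

After repeated rounds the shared model converges toward a consensus of all clients' data, even though the server only ever sees parameters.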
Components of the Architecture:
1. Server: The server acts as the central coordinator of the federated learning process.
It manages the entire training workflow by initiating training rounds, distributing the
initial global model to selected clients, collecting and aggregating model updates, and
overseeing the synchronization of the learning process. The server ensures that the
collaboration among clients leads to a refined global model while maintaining data
privacy.
2. Client and Client Driver: Clients are individual entities or devices that participate in
federated learning. Each client holds its own local dataset, representing data from its
specific environment. The client driver, situated on the server's side, is responsible for
selecting a subset of clients for each training round. It communicates with these chosen
clients, providing them with the current global model and collecting their model
updates after they've trained on their local data. The client driver facilitates the
interaction between the server and individual clients, enabling them to contribute their
insights to the global model.
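The client driver's core responsibility, selecting a subset of clients for each training round, can be sketched as below. This is an illustrative fragment under assumed names (select_clients, the fraction parameter), not the project's actual driver.

```python
import random

def select_clients(all_clients, fraction=0.5, seed=None):
    """Client driver: choose a random subset of the registered clients
    to participate in the current training round."""
    rng = random.Random(seed)
    k = max(1, int(len(all_clients) * fraction))
    return rng.sample(all_clients, k)

# Example: pick half of four registered devices for this round.
registered = ["phone-a", "phone-b", "laptop-c", "tablet-d"]
chosen = select_clients(registered, fraction=0.5, seed=42)
```

Sampling only a fraction of clients per round keeps communication costs bounded while still letting every client contribute over time.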
Message Flow:
1. Local Train: Each individual client trains its local model using its own local
dataset. Clients execute training algorithms on their data to update their models
according to the specific patterns in that data.
2. Local Message: After local training, clients generate a message that contains the
updates made to their local models. This message typically includes information about
the model's weights or gradients that were adjusted during training.
3. Train Message: The train message is sent by each client to the server. It carries the
model updates resulting from the local training process. These updates are then
aggregated to create an improved global model.
4. Local Model Push: Once the train messages are sent to the server, clients push their
locally trained models (updated versions) to the server. These models may then be used
in aggregation.
5. Aggregate Message: The server collects the train messages from all participating
clients. This step involves the aggregation of these individual model updates to create a
new version of the global model.
6. Aggregation Model Push: After the aggregation process is complete, the server
pushes the newly aggregated global model to all participating clients. This updated
global model incorporates the insights from all clients' local models.
7. Aggregation Finish Message: This message is sent by the server to inform the clients
that the aggregation process has been successfully completed. It indicates that the
clients can now update their models with the aggregated global model.
8. Aggregated Model Webhook: Upon receiving the aggregation finish message, clients
can implement a webhook to automatically retrieve the aggregated global model. A
webhook is a way for one system to provide information to another system in real-time.
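The message flow above can be made concrete with simple message types and the server's aggregation step. This is a sketch with assumed class and function names (TrainMessage, AggregationFinishMessage, aggregate); the real system would serialize these over encrypted channels.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrainMessage:
    """Client -> server (step 3): carries only model parameters, never raw data."""
    client_id: str
    weights: List[float]

@dataclass
class AggregationFinishMessage:
    """Server -> clients (step 7): signals that the new global model is ready."""
    round_id: int
    global_weights: List[float]

def aggregate(messages: List[TrainMessage]) -> List[float]:
    """Server (step 5): average the weights from all received train messages."""
    n = len(messages)
    return [sum(ws) / n for ws in zip(*(m.weights for m in messages))]

# Two clients report their locally trained parameters.
msgs = [TrainMessage("c1", [0.2, 0.4]), TrainMessage("c2", [0.6, 0.8])]
new_global = aggregate(msgs)
finish = AggregationFinishMessage(round_id=1, global_weights=new_global)
```

Keeping the messages restricted to parameters (weights or gradients) is what preserves privacy: the server can aggregate without ever seeing a client's dataset.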
2. Call and Return (Layered): The call and return architectural style, also
known as the layered architectural style, involves organizing a system into
multiple layers or tiers. Each layer represents a specific level of abstraction and
functionality. Higher layers depend on the services provided by lower layers,
but lower layers are usually unaware of the specifics of the layers above them.
This separation of concerns helps in modularizing the system and promoting
maintainability, as changes in one layer typically don't affect others. This style
is commonly used in web applications with a frontend-backend separation or in
operating systems with kernel-user space divisions.
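The layered style can be sketched for the finance app's frontend-backend separation. This is an illustrative toy, assuming three hypothetical layers (data, service, presentation) and hard-coded scores; each layer calls only the layer directly below it and knows nothing about the layers above.

```python
class DataLayer:
    """Lowest layer: raw data access (here, a hard-coded score table)."""
    def fetch_score(self, user_id):
        return {"u1": 710, "u2": 640}.get(user_id, 0)

class ServiceLayer:
    """Middle layer: business logic, depends only on the data layer."""
    def __init__(self, data):
        self.data = data
    def risk_band(self, user_id):
        score = self.data.fetch_score(user_id)
        return "low" if score >= 700 else "high"

class PresentationLayer:
    """Top layer: user-facing output, depends only on the service layer."""
    def __init__(self, service):
        self.service = service
    def render(self, user_id):
        return f"Loan risk for {user_id}: {self.service.risk_band(user_id)}"

app = PresentationLayer(ServiceLayer(DataLayer()))
line = app.render("u1")
```

Because each layer talks only to the one below it, the data layer could be swapped (e.g., for a real database) without touching the presentation code, which is the maintainability benefit the style promises.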
3. Apply any two standard styles for your B.E Project and justify.
Justifications:
The C2 style structures the system into independent components for clients, the server,
aggregators, data pre-processors, and more. This modular design enables individual
components to be developed, tested, and maintained independently, enhancing
reusability and system flexibility.
9. Testing and Maintenance: The modular nature of C2 supports ease of testing and
maintenance. Components can be tested independently, and updates can be applied to
specific components without affecting the entire system.