
2020-CS-433 2020-CS-434

2020-CS-445 2020-R/2019-CS-422

Mobile Edge Computing


It seems you are working on a project involving network selection or optimization
across a set of points visited in a northern area. Given that you are using mobile
edge computing (MEC) and a Deep Q-Network (DQN), I'll assume the goal is a system
that autonomously selects the most suitable network at each point based on defined
criteria.

Here's a general outline of how you might approach this problem using DQN and MEC:

State Representation: Define the state space for your DQN agent. This could include
features such as signal strength, network latency, available bandwidth, and other relevant
network metrics at each point.
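As a minimal sketch, the state at each point could be a small normalized feature vector. The metric names and the normalization ranges below are assumptions for illustration, not values from the project:

```python
# Hypothetical state vector for one visited point. The assumed ranges
# ([-100, -30] dBm signal, 0-200 ms latency, 0-100 Mbps bandwidth) are
# illustrative choices, not measured limits.
def build_state(signal_strength_dbm, latency_ms, bandwidth_mbps):
    # Normalize each metric to roughly [0, 1] so DQN training is stable.
    return [
        (signal_strength_dbm + 100.0) / 70.0,   # stronger signal -> closer to 1
        1.0 - min(latency_ms, 200.0) / 200.0,   # lower latency -> closer to 1
        min(bandwidth_mbps, 100.0) / 100.0,     # capped at an assumed 100 Mbps
    ]
```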

Action Space: Define the actions the agent can take, which in this case would be selecting
one of the available networks at each point.
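A discrete action space maps each action index to one candidate network. The network names here are placeholders:

```python
# Hypothetical candidate networks; each DQN action index picks one.
NETWORKS = ["wifi", "lte", "5g"]

def action_to_network(action_index):
    # Guard against out-of-range actions from an untrained policy.
    if not 0 <= action_index < len(NETWORKS):
        raise ValueError(f"invalid action {action_index}")
    return NETWORKS[action_index]
```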

Reward Design: Design a reward function that reflects the suitability of the selected network
at each point. The reward should encourage the agent to select networks that offer good
performance in terms of latency, throughput, reliability, etc.
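One possible reward is a weighted combination of the metrics mentioned above; the weights, caps, and drop penalty below are illustrative assumptions:

```python
# Hypothetical reward for selecting a network at one point. Weights and
# the 200 ms / 100 Mbps caps are assumptions to be tuned per deployment.
def reward(latency_ms, throughput_mbps, dropped):
    r = 0.5 * min(throughput_mbps, 100.0) / 100.0        # reward throughput
    r += 0.5 * (1.0 - min(latency_ms, 200.0) / 200.0)    # reward low latency
    if dropped:
        r -= 1.0  # strong penalty for a dropped connection (reliability)
    return r
```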

Training Data Generation: Collect data from the visited points, including network
performance metrics and ground truth labels indicating which network was most suitable at
each point.
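Each visited point could be logged as a structured record; the field names below are placeholders for whatever metrics the project actually collects:

```python
from dataclasses import dataclass

# Hypothetical log record for one visited point; field names are assumptions.
@dataclass
class PointSample:
    point_id: int
    metrics: dict        # per-network measurements taken at this point
    best_network: str    # ground-truth label: most suitable network here

samples = []
samples.append(PointSample(0, {"wifi": {"latency_ms": 20.0}}, "wifi"))
```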

DQN Training: Train the DQN agent using the collected data. This involves iteratively
updating the Q-values based on the observed states, actions, and rewards.
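The core of that update is the Bellman target. As a sketch, the snippet below uses a plain Q-table keyed by a discretized state in place of the neural network, together with a replay buffer; the hyperparameters are illustrative:

```python
import random
from collections import deque

# Minimal sketch of the DQN update rule. A dict stands in for the neural
# network; GAMMA and ALPHA are illustrative hyperparameters.
GAMMA = 0.9   # discount factor
ALPHA = 0.1   # learning rate

replay = deque(maxlen=10000)   # experience replay buffer
Q = {}                         # (state, action) -> estimated value

def q(state, action):
    return Q.get((state, action), 0.0)

def train_step(batch_size=32, n_actions=3):
    # Sample a minibatch and move each Q-value toward its Bellman target.
    batch = random.sample(list(replay), min(batch_size, len(replay)))
    for state, action, r, next_state in batch:
        target = r + GAMMA * max(q(next_state, a) for a in range(n_actions))
        Q[(state, action)] = q(state, action) + ALPHA * (target - q(state, action))
```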

Deployment and Evaluation: Deploy the trained DQN agent in a real-world setting and
evaluate its performance. Monitor how well it selects networks at each point compared to
manual selection or other baseline approaches.
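The comparison against a baseline can be sketched as a simple evaluation loop; the policies, reward function, and point format below are all placeholders:

```python
# Hypothetical evaluation: total reward of the agent's selections versus
# a fixed baseline (e.g. always pick the same network) over logged points.
def evaluate(points, agent_policy, baseline_policy, reward_fn):
    agent_total = sum(reward_fn(p, agent_policy(p)) for p in points)
    baseline_total = sum(reward_fn(p, baseline_policy(p)) for p in points)
    return agent_total, baseline_total
```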

Iterative Improvement: Continuously collect feedback and iterate on the model to improve
its performance over time. This might involve retraining the agent with updated data or
adjusting the reward function based on new insights.
