
TNE40001-Broadband Multimedia Networks-H1

TUTORIAL 5
103137176

1. The three main sources of latency along an IP network path are transmission delay, serialization delay
and queuing delay. What are they, how do they differ, and how do they combine to form the user's
experience of network latency?

 Transmission delay: the time it takes for a signal to propagate across a link, determined by
the distance and the propagation speed (close to the speed of light in the medium). It can be
calculated as delay = distance / propagation speed. For a given path this delay is fixed and
cannot be reduced.
 Serialization delay: the time it takes to clock all of a packet's bits onto the link at the
sending device, given by packet size / link rate. For a given packet size and link speed it is a
constant value.
 Queuing delay: the time a packet waits in a buffer before it can be transmitted, determined
by the amount of traffic on the link relative to the available bandwidth. Routers have buffers,
and switches have buffers as well.

All of the above delays (apart from TCP delay, which is considered at the application level) add up to the
user's experience of network latency, i.e., the time it takes for a packet to travel from the sender to the
receiver and back again. Latency affects user experience, especially in real-time applications such as
video conferencing and online gaming.
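
As a rough sketch (the link figures below are assumptions for illustration, not from the tutorial), the three components simply add to give one-way latency:

# Sketch: one-way latency as the sum of the three delay components.
# All link figures are assumed, chosen only for illustration.

def propagation_delay(distance_m, speed_mps=2.0e8):
    return distance_m / speed_mps          # time for a bit to cross the medium

def serialization_delay(packet_bytes, link_bps):
    return packet_bytes * 8 / link_bps     # time to clock all bits onto the link

distance = 2_000_000      # 2000 km path (assumed)
link_rate = 5_000_000     # 5 Mbit/s link (assumed)
packet = 1500             # bytes, a typical Ethernet MTU
queuing = 0.002           # 2 ms of queuing (assumed; varies with load)

total = propagation_delay(distance) + serialization_delay(packet, link_rate) + queuing
print(f"one-way latency ~ {total*1000:.2f} ms")   # ~ 14.40 ms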

2. When calculating the contribution of serialization delay to total delay, why do we only count
serialization at the sending end or the receiving end of a link, but not both?
Serialization delay is the time it takes to clock a packet's bits onto the link as a signal that can be
transmitted. It depends only on the packet size and the link's data rate, so it is the same at either end
of the link (it is symmetric), and the bits are being received at one end while they are still being sent
at the other. Hence, we count serialization delay at either the sending or the receiving end of the link,
but not both, or else we would be double counting the same delay.
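
A quick numeric check (packet size and link rate are assumed figures): the delay is the same whichever end you measure it at, so it is counted once per link:

# Serialization delay = packet size / link rate; count it once per link.
packet_bits = 1500 * 8          # 1500-byte packet (assumed)
link_bps = 100_000_000          # 100 Mbit/s link (assumed)
print(packet_bits / link_bps)   # 0.00012 s = 120 us, identical at either end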

3. Your home LAN runs at 100Mbit/sec and connects to the Internet via a cable modem service with
128Kbit/sec uplink and 5Mbit/sec downlink. Two different PCs are sharing the cable modem service.
Where does congestion primarily occur and why?

Since the downlink speed is much greater than the uplink speed, congestion will primarily occur at the
uplink (from the cable modem to the Internet). When both PCs send data at the same time, the
128 Kbit/s uplink is overloaded: there is not enough bandwidth to accommodate the traffic from both
PCs, leading to queuing delays and high latency.
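
A minimal sketch of why the uplink is the bottleneck (the PCs' upstream demand is an assumed figure):

# Two PCs each offer 100 Kbit/s of upstream traffic (assumed demand).
lan_bps    = 100_000_000   # home LAN
uplink_bps = 128_000       # cable modem upstream
demand_bps = 2 * 100_000   # aggregate upstream demand from both PCs

print(demand_bps / lan_bps)     # 0.002  -> LAN barely loaded
print(demand_bps / uplink_bps)  # 1.5625 -> uplink oversubscribed, queue grows
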
4. Some people say that larger queues in home broadband gateways help reduce packet loss. Why might
this be true? What sort of impact would larger queues have on end-to-end latency? Would short queues
or no queues be better?

With long queues, more data packets can be buffered and bursts of traffic can be absorbed. Packets
wait in the queue instead of being dropped, which reduces packet loss and the retransmissions that
follow it. However, the increased queuing delay adds to the RTT, which hurts end-to-end latency and
the user experience. This is not ideal for real-time applications such as live streaming, but it suits
applications where reliability matters more than latency (such as file transfers or buffered video
streaming).

By contrast, short queues give lower latency but a higher chance of packet loss: packets are either
transmitted almost immediately or dropped and retransmitted, which reduces throughput and network
performance. Short queues suit applications where low latency is critical (such as online gaming or
video conferencing). No queues at all would be worse still, since even a brief burst of traffic would
cause loss.
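
A toy queue model (all figures assumed) makes the trade-off concrete: a deep buffer absorbs a burst without loss but adds queuing delay, while a shallow buffer keeps delay low at the cost of drops:

# Toy model: a burst of packets arrives at a queue drained at a fixed rate.
def run(burst_pkts, buffer_pkts, service_ms_per_pkt=10):
    dropped = max(0, burst_pkts - buffer_pkts)
    queued  = burst_pkts - dropped
    worst_wait_ms = queued * service_ms_per_pkt   # delay of the last queued packet
    return dropped, worst_wait_ms

for buf in (5, 50):                               # shallow vs deep buffer (assumed sizes)
    dropped, wait = run(burst_pkts=40, buffer_pkts=buf)
    print(f"buffer={buf:3d} pkts  dropped={dropped:2d}  worst queuing delay={wait} ms")
# buffer=  5 pkts: 35 dropped,  50 ms worst delay
# buffer= 50 pkts:  0 dropped, 400 ms worst delay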

5. Assume two hosts, X and Y are connected by a link of 5Mbps and are separated by
2000kms. Assume that the propagation speed over the link is 2.0e8 m/s. What is the
bandwidth delay product?
Bandwidth = 5 Mbps
One-way propagation delay = distance / speed
= 2,000,000 m / 2.0e8 m/s
= 0.01 s
RTT = 2 * 0.01 s = 0.02 s
Bandwidth delay product = bandwidth * RTT
= 5 Mbps * 0.02 s
= 100,000 bits
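
The same calculation as a short script:

# Bandwidth-delay product for the 5 Mbps, 2000 km link from question 5.
bandwidth_bps = 5_000_000
distance_m    = 2_000_000
speed_mps     = 2.0e8

one_way = distance_m / speed_mps    # 0.01 s
rtt     = 2 * one_way               # 0.02 s
print(bandwidth_bps * rtt)          # 100000.0 bits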
6. What is the difference between the message sequence and bandwidth load in a client-
server model and peer-to-peer model?

The main difference between a client-server model and a peer-to-peer model is the
centralized vs. decentralized nature of the communication. In both models the
message sequence follows a request-response pattern, but in a client-server model
every message passes through the server: the server processes and responds to all
client requests, so the bandwidth load is concentrated on the server and grows with
the number of clients. In a peer-to-peer model, each peer exchanges messages
directly with every other peer, so the bandwidth load is distributed across the
nodes, although each node's own load still grows with the number of peers.
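
A rough comparison of per-node bandwidth load in the two models (player count, packet size and update rate are assumed figures):

# Bandwidth load per node, client-server vs peer-to-peer (figures assumed).
n_players = 8
pkt_bits  = 100 * 8        # 100-byte updates
rate_hz   = 50             # one update every 20 ms

cs_server = n_players * pkt_bits * rate_hz          # server sends to every client
cs_client = pkt_bits * rate_hz                      # each client talks only to the server
p2p_node  = (n_players - 1) * pkt_bits * rate_hz    # every peer sends to every other peer

print(cs_server, cs_client, p2p_node)   # 320000 40000 280000 (bits/s)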
7. In client-server model game

a) What is the role of a command packet?


A command packet is used to transmit information about the player's input or
actions to the server, allowing the server to update the game state accordingly.

b) What is the role of a snapshot packet?


A snapshot packet is used by the server to send the current state of the game to
the client, allowing the client to render the game world locally.

c) Suppose the time-step is 33.33ms, what is the tickrate (ticks per second)?
tick rate = 1 / 0.03333 s ≈ 30 ticks per second.

d) What is the relationship between a game server's snapshot transmission and tickrate?
The game server's snapshot transmission rate is directly tied to the tickrate, as each
snapshot represents the current state of the game at a particular tick. A higher
tickrate requires more frequent snapshots to be transmitted to the clients in order
to maintain smooth gameplay.

e) What is the maximum number of snapshot transmissions that can occur? In reality,
does this actually occur?
At most one snapshot can be generated per tick, so the maximum snapshot
transmission rate equals the tickrate. In reality this maximum often does not
occur: servers commonly send snapshots less often than every tick to save
bandwidth, and network capacity and client link speeds limit delivery further.

f) Suppose the tickrate is 66.66 ticks per sec, what is the time-step?
time-step = 1 / 66.66 ≈ 0.015 s (15 ms).

g) Assuming that there is enough available bandwidth on both nodes (server and client)
and that the both nodes have enough CPU power, etc. Out of the two answers (7c and
7f) which one is more preferable? Why?
The higher tickrate of 7f (66.66 ticks per second, i.e. a 15 ms time-step) is
preferable, as it allows more frequent updates of the game state and therefore
smoother, more responsive gameplay. In general this must be balanced against the
available bandwidth and CPU resources, since a higher tickrate requires more
frequent snapshot transmissions and places a greater load on the server and
clients, but the question assumes both are plentiful.
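
A minimal sketch of a fixed time-step server loop (the function names are hypothetical, not from the tutorial), showing how the tickrate bounds snapshot transmissions:

import time

TICKRATE  = 66.66             # ticks per second (question 7f)
TIME_STEP = 1.0 / TICKRATE    # ~0.015 s per tick

def simulate(commands):       # hypothetical: apply queued command packets
    pass

def send_snapshots(clients):  # hypothetical: one snapshot per client per tick
    pass

def server_loop(clients, ticks=10):
    for _ in range(ticks):
        start = time.monotonic()
        simulate(commands=[])        # advance the game state one time-step
        send_snapshots(clients)      # at most one snapshot per tick
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, TIME_STEP - elapsed))   # hold the tickrate steady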

8. Assume a client-server scenario. Further assume that the server sends 300-byte
update packets to each client every 50ms, and the server is connected to the Internet by
a 1Mbps link.
a) What is the maximum number of clients who can play on this server? (Make
reasonable simplifying assumptions.)

maximum number of clients = total bandwidth of the link / bandwidth consumed by a single client
bandwidth consumed by a single client = 300 bytes / 50 ms
= 6,000 bytes/s
= 48,000 bits/s
maximum number of clients = 1,000,000 bits/s / 48,000 bits/s = 20.83
≈ 20 clients (rounding down)

b) If instead we want the server to support 25 clients, with updates sent every 50ms,
what is the maximum allowable size for each update packet (in bytes)?

With 25 clients sharing the 1 Mbps link, each client can receive 1,000,000 / 25 = 40,000 bits/s. At one
update every 50 ms (20 updates per second), each update may be at most 40,000 / 20 = 2,000 bits.

Therefore, the maximum allowable size of each update packet = 2,000 / 8 = 250 bytes.
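
Both parts of question 8 reduce to the same bandwidth budget; a short sketch:

# Question 8: budgeting a 1 Mbps server link across clients.
link_bps   = 1_000_000
interval_s = 0.050

# a) max clients for 300-byte updates
per_client_bps = 300 * 8 / interval_s           # 48,000 bits/s
print(int(link_bps // per_client_bps))          # 20 clients

# b) max packet size for 25 clients
per_client_bps = link_bps / 25                  # 40,000 bits/s
print(per_client_bps * interval_s / 8)          # 250.0 bytes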

9. Each node (peer, client, or server) has an ADSL service with 2Mbps downstream and
512Kbps upstream. a) A peer-to-peer game involves N players, every node is sending
100 byte IP packets every 20ms. What is the highest number of players this game can
have?
To calculate the maximum number of players, we first calculate the bandwidth used
by each stream of updates. Each node sends a 100-byte packet every 20 ms to each of
the other N - 1 players, so each stream consumes 100 * 8 / 0.020 = 40,000 bits/s.
The 512 Kbps upstream is the binding constraint: 512,000 / 40,000 = 12.8, so a node
can sustain at most 12 outgoing streams (the 2 Mbps downstream would allow 50
incoming streams). With 12 other players reachable, the highest number of players
this game can have is 13.
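The upstream bound can be checked in a few lines:

# Question 9a: peer-to-peer player count bounded by the ADSL upstream.
up_bps     = 512_000
down_bps   = 2_000_000
stream_bps = 100 * 8 / 0.020              # 40,000 bits/s per peer stream

peers_up   = int(up_bps // stream_bps)    # 12 outgoing streams fit upstream
peers_down = int(down_bps // stream_bps)  # 50 incoming streams fit downstream
print(min(peers_up, peers_down) + 1)      # 13 players; upstream is the bottleneck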
b) A client-server game involves N players, the server is sending a 300 byte update
packets to the client every 20ms. Assume that the RTT between the server and all clients
is the same.

i. What would be the highest number of players this game could have?
To calculate the maximum number of players, we need the bandwidth consumed per
client. Each client receives a 300-byte update packet every 20 ms, which is
equivalent to 300 * 8 / 0.020 = 120,000 bits/s. The server's 512 Kbps upstream link
is the bottleneck: 512,000 / 120,000 = 4.27, so the highest number of players this
game could have is 4.

ii. How much time discrepancy exists between the start of the first client's update packet
and the start of the fourth client's update packet?
Assuming the RTT is negligible, the server sends the four clients' update packets
back to back on its upstream link each tick, so the discrepancy is three
serialization delays. Serializing one 300-byte (2,400-bit) packet at 512 Kbps takes
2,400 / 512,000 = 4.69 ms, so the discrepancy between the start of the first and the
fourth client's packet is 3 * 4.69 ms ≈ 14.06 ms.
iii. If we capped the number of players to four, and increased the upstream link speed to
5Mbps, what effect does this have on the time discrepancy answer in 9bii?

The faster upstream link shrinks each packet's serialization delay, so the time
discrepancy is reduced. Serializing a 2,400-bit packet at 5 Mbps takes
2,400 / 5,000,000 = 0.48 ms, so the discrepancy between the first and fourth
clients' packets falls to 3 * 0.48 ms = 1.44 ms.
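
The answers to 9bii and 9biii are three back-to-back serialization delays, as this sketch shows:

# Question 9b ii/iii: time between the 1st and 4th clients' update packets,
# i.e. three back-to-back serialization delays on the server's upstream link.
pkt_bits = 300 * 8

for up_bps in (512_000, 5_000_000):
    gap = 3 * pkt_bits / up_bps
    print(f"{up_bps/1e6:.3f} Mbps upstream -> {gap*1000:.2f} ms")
# 0.512 Mbps upstream -> 14.06 ms
# 5.000 Mbps upstream -> 1.44 ms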

10. What would we like to have to reduce game latency?


To reduce game latency, the following can be done.
1. Reduce the packet size: the smaller the packets, the smaller the serialization delays.
2. Reduce queuing delays by sending packets less often, i.e. fewer packets per second.
3. Use compression to reduce packet size.
4. Reduce traffic by sending only the data that concerns the player's view of the world.
5. Reduce the RTT for a better user experience and lower in-game latency.
