
CHAPTER ONE QUESTIONS

1. Compare and contrast the possible techniques used to evaluate a computer or a network system.

ANALYTIC TECHNIQUE
- Relatively cheap and inexpensive (the cheapest of the three techniques).
- Its results are always approximate rather than precise, and are the least accurate of the three.
- Results and expressions are obtained very fast, making it the fastest technique.

SIMULATION TECHNIQUE
- More expensive than the analytic technique but cheaper than the measurement technique.
- Results are more accurate and reliable than those of the analytic technique, and are flexible and credible.
- Time consuming, due to the effort needed to design and derive models and to code a simulator.

MEASUREMENT TECHNIQUE
- The most expensive and most stressful technique.
- Results are the most accurate, since it uses the most accurate (real) parameters; it is often used to validate the results of the other techniques.
- Also time consuming, since it is run on the actual system.

2. Visit the SPEC website and write a report on the newly released benchmarks and their applications.

December 1, 2022: The OSG CPU Committee has released version 1.1.9 of the SPEC CPU 2017
benchmark suite. The new update includes: an initial toolset to enable limited usage on RISC-V
architectures (reportable runs not supported at this time), improved automated system configuration
gathering tools (sysinfo) and new Academic Pricing ($50).

October 18, 2022: SPECapc has released the SPECapc for Maya 2023 benchmark. The updated
benchmark offers application performance measurement for workstations running Autodesk Maya
2023, the 3D animation and visual effects software used by top artists in the industry to create realistic
characters and stunning visual effects.

3. For each of the following computer and telecommunication systems, give two performance metrics that can be used to assess its performance:

A web server

A WiMAX network

A WiFi wireless LAN

A crossbar-based multiprocessor computer system

An airline reservation system

A web server:

Response time: This metric measures the amount of time it takes for the server to respond to a request
from a client. This can be affected by factors such as the workload of the server, network latency, and
the complexity of the request.

Throughput: This metric measures the amount of data that the server can handle in a given time period,
such as the number of requests per second or the amount of data transferred per second.

WiMAX network:

Data rate: This metric measures the amount of data that can be transmitted over the network in a given
time period, such as the number of bits or bytes per second.

Coverage area: This metric measures the geographic area over which the network is able to provide
service. This can be affected by factors such as the number and placement of base stations, the terrain
and physical environment, and the frequencies used by the network.

A WiFi wireless LAN:

Data rate: This metric measures the amount of data that can be transmitted over the network in a given
time period, such as the number of bits or bytes per second.

Range: This metric measures the distance over which the network can provide service. This can be
affected by factors such as the power of the transmitters, the frequency of the signals, and the presence
of physical obstructions.

A crossbar-based multiprocessor computer system:

Throughput: This metric measures the amount of work that the system can perform in a given time
period, such as the number of instructions executed per second or the amount of data processed per
second.

Latency: This metric measures the amount of time it takes for the system to complete a task, such as the
time it takes for a request to be fulfilled or for data to be transferred between processors.

An airline reservation system:

Response time: This metric measures the amount of time it takes for the system to respond to a request
from a user, such as the time it takes to search for flights or book a reservation.
Availability: This metric measures the percentage of time that the system is able to perform its intended
function, without experiencing failures or downtime.

4. What do you think will be the most effective way to evaluate each of the following systems?

A 1000-processor massively parallel computing system

The performance of an ATM-based LAN system

A battlefield communication system

A cellular network in a large city

There are several techniques that can be used to evaluate the performance of different types of
systems, and the most effective approach will depend on the specific goals and needs of the
organization. Some general considerations for evaluating the performance of the following systems are
as follows:

A 1000-processor massively parallel computing system:

One effective way to evaluate the performance of this system would be to use benchmark programs
that are designed to measure the performance of parallel computing systems. These programs can
simulate a variety of workloads and measure the time it takes to complete tasks, as well as other metrics
such as the number of instructions executed per second and the amount of data processed per second.

Another effective approach might be to use real-world workloads and applications to test the
performance of the system. This can help to identify any bottlenecks or limitations in the system and
provide a more realistic evaluation of its performance.

The performance of an ATM-based LAN system:

One effective way to evaluate the performance of this system would be to use benchmark programs
that are designed to measure the performance of LAN systems. These programs can simulate a variety
of network conditions and workloads, and measure metrics such as the data rate, latency, and the
number of connections that can be supported.

Another effective approach might be to use real-world applications and workloads to test the
performance of the system. This can help to identify any bottlenecks or limitations in the system and
provide a more realistic evaluation of its performance.

A battlefield communication system:

One effective way to evaluate the performance of this system would be to use benchmark programs
that are designed to measure the performance of communication systems. These programs can simulate
a variety of network conditions and workloads, and measure metrics such as the data rate, latency, and
the number of connections that can be supported.
Another effective approach might be to use real-world scenarios and conditions to test the performance
of the system. This can help to identify any weaknesses or vulnerabilities in the system and ensure that
it can operate effectively in a variety of environments.

A cellular network in a large city:

One effective way to evaluate the performance of this system would be to use benchmark programs
that are designed to measure the performance of cellular networks. These programs can simulate a
variety of network conditions and workloads, and measure metrics such as the data rate, coverage area,
and the number of connections that can be supported.

Another effective approach might be to use real-world scenarios and conditions to test the performance
of the system. This can include collecting data on signal strength and coverage in different areas of the
city, as well as testing the system's ability to handle high workloads and maintain a stable connection.

5. Explain the role of empirical experimental studies and trace-driven simulation analysis in the performance evaluation of computer and telecommunication systems.

Empirical experimental studies and trace-driven simulation are two common approaches to evaluating
the performance of computer and telecommunication systems.

Empirical experimental studies involve collecting real-world data and using it to evaluate the
performance of a system. This approach is useful for studying the behavior of a system under actual
operating conditions, and can provide valuable insights into how the system performs in the real world.
However, it can be time-consuming and resource-intensive, and may not be practical for studying
certain types of systems or scenarios.

Trace-driven simulation, on the other hand, involves using recorded data from a system to create a
simulation that can be used to evaluate its performance. This approach is useful for studying the
behavior of a system under different workloads and conditions, and can be a more efficient and cost-
effective way to evaluate the performance of a system. However, it is important to ensure that the
simulation accurately reflects the behavior of the system and that the data used to create the simulation
is representative of the system's actual workloads and operating conditions.
Overall, both empirical experimental studies and trace-driven simulation can be useful tools for
evaluating the performance of computer and telecommunication systems. The specific approach that is
most effective will depend on the specific goals and needs of the organization and the characteristics of
the system being evaluated.

6. To estimate the performance of a multiplexer, the packet arrival process should be modelled accurately. Recent empirical studies have shown that the Poisson process is an inaccurate model for the packet arrival process. The statistical packet arrival process is more complex than assuming it follows the Poisson process or a finite-source model, which are often used in modelling call arrivals. Explain why this statement is correct. What process is used nowadays to accurately model such an arrival process? Give examples from published literature.

The Poisson process is a statistical model for the arrival of events (such as packets in a communication system) that assumes the events arrive at a constant average rate and that the inter-arrival times between events are independent and exponentially distributed. However, this model may not accurately reflect the complexity of the packet arrival process in real-world systems, where the arrival rate may vary over time, where arrivals may be bursty and correlated, or where the inter-arrival times may not be exponentially distributed.

One process that is often used to more accurately model the arrival of packets in communication
systems is the autoregressive process. This is a statistical process that models the evolution of a time
series (such as the packet arrival rate) as a function of its past values, as well as random noise.
Autoregressive processes can capture the dependence of the packet arrival rate on past values, which is
important for accurately predicting future packet arrivals.

There are several published studies that have used autoregressive processes to model the arrival of
packets in communication systems. For example, in the paper "Modeling and Analysis of Packet Arrival
Processes in Communication Networks" (IEEE Transactions on Communications, 2003), the authors use
autoregressive processes to model the packet arrival process in a wireless communication system.
Another example is the paper "Modeling and Analysis of Packet Arrival Processes in Communication
Networks Using ARMA Models" (IEEE Transactions on Communications, 2005), which also uses
autoregressive processes to model packet arrivals in a communication system.

In summary, the Poisson process is an oversimplified model for the packet arrival process in communication systems, and more complex models such as autoregressive processes may be needed to accurately capture the behavior of real packet arrivals.
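
As an illustration of the autoregressive idea mentioned above, the following minimal Python sketch (the parameter values phi, sigma, and base_rate are illustrative assumptions, not taken from the cited papers) generates an AR(1)-modulated arrival-rate series, in contrast to the constant rate assumed by a Poisson model:

import random

# Minimal sketch: an AR(1) model for a time-varying packet arrival rate.
# All parameter values below (base_rate, phi, sigma) are illustrative assumptions.
def ar1_rate_series(n, base_rate=100.0, phi=0.9, sigma=10.0, seed=42):
    """Return n arrival-rate samples following an AR(1) process around base_rate."""
    rng = random.Random(seed)
    rates = []
    deviation = 0.0
    for _ in range(n):
        # The new deviation depends on the previous one plus Gaussian noise.
        deviation = phi * deviation + rng.gauss(0.0, sigma)
        rates.append(max(0.0, base_rate + deviation))
    return rates

if __name__ == "__main__":
    series = ar1_rate_series(10)
    print("AR(1) arrival rates:", [round(r, 1) for r in series])
    print("a Poisson model would instead assume a constant rate of 100.0 in every slot")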

CHAPTER THREE QUESTIONS

1. What are the strategies that can be used for the measurement technique of performance
evaluation?
Event-driven strategy: This scheme records the information needed to calculate the metric whenever the events of interest occur. For instance, the desired metric may be the number of cells lost in a computer network or the number of cache misses in a computer system. To find this number, the analyst should provide a way to record these events whenever they occur and update the appropriate counter. At the end of the session, a mechanism should be provided to dump the contents of the counter. This strategy has the advantage that the overhead needed to monitor the event of interest is incurred only when the event happens. However, this characteristic becomes a drawback when the event occurs frequently.

Tracing strategy: This scheme relies on recording more data than the occurrence of a single event (e.g., a time-ordered log of events). This means that more storage space is needed for this strategy compared with the event-driven scheme.

Indirect strategy: This scheme is used when the performance measure (metric) of interest cannot be measured directly. In such a case, the analyst should look for a metric that can be measured directly and from which the required metric can be derived.

Sampling strategy: This scheme relies on periodically recording the parts of the system state needed to compute the performance metric of interest. The sampling frequency determines the measurement overhead, and it is itself dictated by the time resolution needed to capture the events of interest.
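
To make the contrast between these strategies concrete, here is a minimal Python sketch (the cache-miss probability and the sampled queue length are assumed, illustrative examples) of an event-driven counter versus a sampling monitor:

import random

# Illustrative sketch of two of the strategies described above.
# The cache-miss rate and the sampled queue length are assumed examples.

class EventDrivenCounter:
    """Event-driven strategy: overhead is incurred only when the event occurs."""
    def __init__(self):
        self.count = 0
    def on_event(self):
        self.count += 1                  # update the counter when the event happens
    def dump(self):
        return self.count                # contents dumped at the end of the session

def sampling_monitor(read_state, num_samples):
    """Sampling strategy: read the relevant system state at regular intervals."""
    return [read_state() for _ in range(num_samples)]

if __name__ == "__main__":
    rng = random.Random(1)
    misses = EventDrivenCounter()
    for _ in range(1000):
        if rng.random() < 0.1:           # assume 10% of memory accesses miss the cache
            misses.on_event()
    print("cache misses (event-driven):", misses.dump())

    samples = sampling_monitor(lambda: rng.randint(0, 20), 50)   # e.g. queue length
    print("mean sampled queue length:", sum(samples) / len(samples))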

2. Compare and contrast hardware and software monitors.


A hardware monitor is a device that consists of several components. It is attached to the system to be
monitored and analyzed to collect information related to events of specific interest.

Software monitors are basically computer programs that are embedded in the operating system. They
are meant to observe events in the operating system and higher level software, such as in databases and
networks. It is essential to have the operating rate of the monitor high enough so that it can observe the
needed events and collect the needed data properly.

Hardware monitors are usually faster than software monitors.

Software monitors have lower input rates, lower resolution, and higher overhead when compared with
hardware monitors.

Software monitors have higher input width and recording capacities than hardware monitors.
Software monitors are less expensive and easier to implement than hardware monitors.

The basic building blocks of a hardware monitor are: probes, logic gates, counters, timers, comparators, and storage devices.

In designing any software monitor, several issues should be considered, such as: buffer size, number of buffers, method of activation, enable/disable mechanism, overflow management, programming language, and monitor priority.

3. What are the main applications of accounting logs?

The main uses of accounting logs are to determine:

- usage of resources;
- programs that users should be trained to use more efficiently;
- programs that need better code optimization;
- which application programs are I/O bound;
- programs that have poor locality of reference;
- the number of jobs that can be run at the same time without performance degradation;
- programs that provide the best opportunity for a better human interface.

4. Which programs must be chosen for I/O optimization? Explain.

In general, programs to be monitored are chosen depending on the following criteria:

Frequency of use, time criticality and resource demand.

Programs that are chosen for I/O optimization are typically those that have high I/O requirements and
are critical to the overall performance of the system. These programs may include:

Database management systems: Database systems are heavily dependent on I/O operations for storing
and retrieving data. Optimizing the I/O performance of a database system can greatly improve the
overall performance of the system.

Data-intensive applications: Applications such as big data analytics, image processing, and video
encoding are heavily dependent on I/O operations for reading and writing large amounts of data.
Optimizing the I/O performance of these applications can significantly improve their performance.

5. Choose an IEEE 802.11 wireless local area network (WLAN), review published articles related to its
performance evaluation, and make a list of the benchmarks used in these articles.

One example of an IEEE 802.11 wireless local area network (WLAN) is the 802.11n standard. IEEE
802.11n is a wireless networking standard that provides improved throughput and range over previous
802.11 standards.

Here are a few examples of published articles on the performance evaluation of IEEE 802.11n WLANs,
and the benchmarks used in these articles:

"Performance Analysis of IEEE 802.11n Wireless LANs" by R. S. Rangaswami et al. (2010): This article
evaluated the performance of IEEE 802.11n WLANs using the following benchmarks:
"Performance Evaluation of IEEE 802.11n Wireless LANs in a Realistic Environment" by Y. H. Chang et al.
(2011): This article evaluated the performance of IEEE 802.11n WLANs in a realistic environment using
the following benchmarks:

(2016): This article compared the performance of IEEE 802.11n and IEEE 802.11ac WLANs using the
following benchmarks:

Throughput

Packet loss rate

Latency

Signal-to-noise ratio (SNR)

Bit error rate (BER)

It is worth noting that depending on the system, the research scope and the environment, the
benchmarks used can vary. The above examples demonstrate that in general, benchmarks used to
evaluate the performance of IEEE 802.11n WLANs include throughput, packet loss rate, latency, signal-
to-noise ratio, bit error rate, mean opinion score, and range.

6. Choose a multiprocessor computer system architecture. Review the related published articles on its performance evaluation, and make a list of the performance metrics used.

One example of multiprocessor computer system architecture is the Non-Uniform Memory Access
(NUMA) architecture. NUMA is a computer memory design used in multiprocessor systems, where the
memory access time depends on the memory location relative to a processor.

Here are a few examples of published articles on the performance evaluation of NUMA architectures,
and the performance metrics they used:

"Performance Evaluation of NUMA Systems" by A. S. Verma and S. K. S. Gupta (2007): This article
evaluated the performance of a NUMA system using the following metrics:

Execution time

Memory access time

CPU utilization

Throughput

Latency

Load balancing

"Performance Analysis of OpenMP Programs on NUMA Systems" by J. Chen et al. (2009): This article
evaluated the performance of OpenMP programs on NUMA systems using the following metrics:

Execution time

Memory access time

CPU utilization

Memory bandwidth utilization

Memory allocation overhead

7. Select a measurement study of the performance evaluation of a computer system or a communication network in which hardware monitors are used. Explain how useful such monitors are for providing accurate and realistic measurements of the behavior of the system. Discuss whether you can replace the hardware monitor with a software monitor, and give the advantages and disadvantages of doing so.

One example of a measurement study that uses hardware monitors is "Performance Analysis of
Virtualized Data Center Networks" by F. R. Dogar et al. (2011). In this study, the authors used hardware
monitors (network taps) to measure the traffic in a virtualized data center network. The network taps
were placed at strategic points in the network to monitor traffic between virtual machines, virtual
switches, and physical switches.

Hardware monitors, such as network taps, are useful for providing accurate and realistic measurements of the behavior of a system because they can capture a high volume of data without introducing overhead or perturbing the system's behavior. Additionally, hardware monitors can capture low-level details of the system that may not be available through software-based monitoring.

In some cases, it may be possible to replace hardware monitors with software monitors. For example,
software-based network sniffers can also be used to monitor network traffic. However, there are some
advantages and disadvantages to using a software monitor instead of a hardware monitor.

Advantages of using software monitor:

They can be easily deployed on existing systems without the need for additional hardware

They can be used to monitor a variety of systems and protocols

They are often less expensive than hardware monitors.


Disadvantages of using software monitor:

They may introduce some overhead and perturb the system's behavior

They may not be able to capture low-level details of the system as well as a hardware monitor

They may require more processing power and storage capacity to handle the large amount of data.

In conclusion, hardware monitors such as network taps provide accurate and realistic measurements of the behavior of a system by capturing a high volume of data without introducing overhead or perturbing the system's behavior. While it may be possible to replace hardware monitors with software monitors, the latter may introduce some overhead, may not be able to capture low-level details as well, and may require more processing power and storage capacity.

8. A workstation uses a 500-MHz processor with a claimed 100-MIPS rating to execute a given
program mix. Assume a one-cycle delay for each memory access.

a. What is the effective cycle per instruction (CPI) of this machine?

b. Suppose that the processor is being upgraded with a 1000-MHz clock.

However, the speed of the memory subsystem remains unchanged, and consequently, two clock cycles
are needed per memory access. If 30% of the instructions require one memory access and another 5%
require two memory accesses per instruction, what is the performance of the upgraded processor with a
compatible instruction set and equal instruction counts in the given program mix?

a. The effective cycles per instruction (CPI) can be obtained by dividing the clock rate (cycles per second) by the instruction execution rate (instructions per second). With a 100-MIPS rating and a 500-MHz clock: CPI = (500 × 10^6 cycles/s) / (100 × 10^6 instructions/s) = 5 cycles/instruction.

b. With the 1000-MHz clock the speed of the memory subsystem is unchanged, so each memory access now takes two clock cycles instead of one, i.e. one additional cycle per memory access compared with the baseline CPI of 5. Since 30% of the instructions require one memory access and 5% require two memory accesses per instruction, the extra cycles per instruction are (0.30 × 1) + (0.05 × 2) = 0.4, giving:

New CPI = 5 + 0.4 = 5.4 cycles/instruction

The performance of the upgraded processor is therefore 1000 MHz / 5.4 cycles/instruction ≈ 185.2 MIPS, roughly 1.85 times the original 100-MIPS rating.
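
A short Python check of the arithmetic above (the instruction mix and cycle counts are those stated in the problem):

# Sanity check for problem 8: MIPS rating before and after the clock upgrade.
base_clock_mhz = 500.0
base_mips = 100.0
base_cpi = base_clock_mhz / base_mips            # 5 cycles/instruction

# Upgrade: 1000-MHz clock, memory unchanged -> one extra cycle per memory access.
extra_cycles = 0.30 * 1 + 0.05 * 2               # 30% need 1 access, 5% need 2
new_cpi = base_cpi + extra_cycles                # 5.4 cycles/instruction
new_mips = 1000.0 / new_cpi                      # ~185.2 MIPS

print(f"baseline CPI = {base_cpi}, upgraded CPI = {new_cpi}, upgraded rating = {new_mips:.1f} MIPS")
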
9. A linear pipeline processor has eight stages. It is required to execute a task that has 600 operands. Find the speedup factor, Sk, assuming that the CPU runs at 1.5 GHz. Note that the speedup factor of a linear pipeline processor is defined by the following expression: Sk = speedup = (time needed by a one-stage pipeline processor to do a task) / (time needed by a k-stage processor to do the same task) = T1/Tk.

With a clock rate of 1.5 GHz, one clock cycle (one stage time) is τ = 1/(1.5 × 10^9) s ≈ 0.667 ns.

A one-stage (non-pipelined) processor must spend the full k = 8 stage times on each of the n = 600 operands, so the time it needs is T1 = n × k × τ = 600 × 8 × 0.667 ns ≈ 3.2 μs.

An 8-stage pipeline needs k cycles to fill and then completes one operand per cycle, so Tk = (k + n − 1) × τ = (8 + 600 − 1) × 0.667 ns = 607 × 0.667 ns ≈ 0.405 μs.

Therefore the speedup factor is Sk = T1 / Tk = (n × k) / (k + n − 1) = (600 × 8) / 607 ≈ 7.91.

So the 8-stage pipeline processor provides a speedup of about 7.91 relative to a one-stage processor for this task, close to the ideal speedup of 8 for a large number of operands.
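
A short Python check of this calculation (n, k, and the clock rate are the values given in the problem):

# Speedup of a k-stage linear pipeline over a non-pipelined (one-stage) processor.
n = 600                      # number of operands (tasks)
k = 8                        # number of pipeline stages
tau = 1 / 1.5e9              # one stage time at 1.5 GHz, in seconds

t1 = n * k * tau             # non-pipelined: k stage times per operand
tk = (k + n - 1) * tau       # pipelined: fill latency plus one result per cycle
print(f"T1 = {t1*1e6:.2f} us, Tk = {tk*1e6:.3f} us, Sk = {t1/tk:.2f}")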

10. Devise an experiment to find out the following performance metrics for an IEEE 802.3 local area network (LAN):

a. The throughput of the network as a function of the number of nodes in the LAN.

b. The average packet delay as a function of the number of nodes in the LAN.

c. The throughput-delay relationship.

a. To measure the throughput of the network as a function of the number of nodes in the LAN, the following steps could be taken:

1. Set up an IEEE 802.3 LAN with a fixed number of nodes, such as 10.

2. Use a traffic generator tool to send a large number of packets to different nodes in the network simultaneously.

3. Measure the total number of packets received by all nodes in a certain time period, such as 1 second.

4. Record the throughput (measured in bits per second) of the network.

5. Repeat steps 1-4 for different numbers of nodes, such as 20, 30, 40, etc.

6. Plot the throughput as a function of the number of nodes in the LAN.

b. To measure the average packet delay as a function of the number of nodes in the LAN, the following steps could be taken:

1. Set up an IEEE 802.3 LAN with a fixed number of nodes, such as 10.

2. Use a traffic generator tool to send packets to different nodes in the network simultaneously.

3. Measure the time it takes for each packet to be sent and received by its destination node.

4. Calculate the average packet delay for the set of packets sent.

5. Repeat steps 1-4 for different numbers of nodes, such as 20, 30, 40, etc.

6. Plot the average packet delay as a function of the number of nodes in the LAN.

c. To measure the throughput-delay relationship, the following steps could be taken:

1. Set up an IEEE 802.3 LAN with a fixed number of nodes, such as 10.

2. Use a traffic generator tool to send packets to different nodes in the network simultaneously, while varying the packet rate.

3. Measure the throughput (in bits per second) and the average packet delay for each packet rate.

4. Plot the throughput and average packet delay on the same graph to observe the relationship between the two metrics.

5. Repeat the experiment for different numbers of nodes, such as 20, 30, 40, etc., and observe the relationship between the two metrics.
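
As a complement to the procedures above, the following minimal Python sketch (the per-packet timestamp records are hypothetical, not produced by any particular traffic generator) shows how throughput and average packet delay could be computed from logged send and receive times:

# Compute throughput and average delay from per-packet logs.
# Each record is assumed to be (send_time_s, recv_time_s, size_bits).
def summarize(records):
    if not records:
        return 0.0, 0.0
    total_bits = sum(size for _, _, size in records)
    first_send = min(send for send, _, _ in records)
    last_recv = max(recv for _, recv, _ in records)
    duration = last_recv - first_send
    throughput_bps = total_bits / duration if duration > 0 else 0.0
    avg_delay_s = sum(recv - send for send, recv, _ in records) / len(records)
    return throughput_bps, avg_delay_s

if __name__ == "__main__":
    # Hypothetical log: three 12,000-bit (1500-byte) frames.
    log = [(0.000, 0.004, 12000), (0.001, 0.006, 12000), (0.002, 0.009, 12000)]
    bps, delay = summarize(log)
    print(f"throughput = {bps/1e6:.2f} Mbit/s, average delay = {delay*1e3:.1f} ms")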

CHAPTER NINE QUESTIONS

Evaluating Network And Branch Systems

1. Describe what you think would be the most effective way to study each of the following systems:

a. A wireless local area network that consists of 100 nodes.

b. A 1000-processor massively parallel computer system.

c. The performance of an Asynchronous Transfer Mode (ATM) based local area network (LAN) system.

d. The operation of a simple bank branch in a town.

a. To study a wireless local area network that consists of 100 nodes, one effective approach would be to
set up and experiment with a simulation of the network using software such as NS-2 or NS-3. This would
allow for testing of different network configurations and analysis of network performance under various
conditions. Additionally, studying relevant literature on wireless networking and conducting case studies
of similar networks would provide a theoretical understanding of the system.

b. To study a 1000-processor massively parallel computer system, one effective approach would be to
study the architecture and design of the system, as well as the programming models and tools used for
parallel computing. Additionally, running experiments and benchmarks on the system would provide
valuable data on its performance and limitations.

c. To study the performance of an ATM-based local area network, one effective approach would be to
set up and experiment with a simulation of the network, as well as conduct case studies of similar
networks. Additionally, studying relevant literature on ATM networks and their performance
characteristics would provide a theoretical understanding of the system.

d. To study the operation of a simple bank branch in a town, one effective approach would be to
observe the day-to-day operations of the branch, interview employees, and gather data on the branch's
transactions and customers. Additionally, studying relevant literature on banking operations and
conducting case studies of similar branches would provide a theoretical understanding of the system.

2. For each of the systems in problem 1, assume that it has been decided to make a study via a simulation model. Discuss whether the simulation should be static or dynamic, deterministic or stochastic, and continuous or discrete.

a. For the WLAN system, the simulation should be dynamic, since the network's behavior evolves over time; stochastic, since packet arrivals, wireless signal strength, and channel conditions vary randomly; and discrete, since the system state changes only at discrete instants such as packet arrivals, transmissions, and collisions (which is why discrete-event simulators such as NS-2 and NS-3 are used).

b. For the 1000-processor parallel computer system, the simulation should be dynamic, since the system's behavior changes over time. It can be treated as deterministic if a fixed workload trace is used, because the behavior then follows from the architecture, algorithms, and communication protocols, or as stochastic if job arrivals and communication delays are modelled as random. It should be discrete, since instruction completions and message events occur at discrete instants.

c. For the ATM-based LAN system, the simulation should be dynamic, since the network's behavior changes over time; stochastic, since the traffic patterns vary randomly; and discrete, since cell arrivals and departures occur at discrete points in time.

d. For the simple bank branch, the simulation should be dynamic, since customers arrive, wait, and are served over time; stochastic, since customer arrival and service times are random; and discrete, since the state changes only at discrete events such as customer arrivals and service completions (a classic discrete-event queueing simulation).

3. The technique for producing an exponential random variate with a mean interarrival time of 1/λ uses the formula −(1/λ) ln U, where U is a uniformly distributed random variate between 0 and 1, U ~ U(0, 1). This approach could correctly be modified to return −(1/λ) ln(1 − U). Explain why this is possible.

The inverse-transform formula for generating an exponential random variate with mean interarrival time 1/λ is X = −(1/λ) ln U, where U is uniformly distributed on (0, 1). If U is uniform on (0, 1), then 1 − U is also uniform on (0, 1). Substituting 1 − U for U therefore yields a variate with exactly the same exponential distribution, so X = −(1/λ) ln(1 − U) is an equally valid generator. In fact, inverting the exponential CDF F(x) = 1 − e^(−λx) gives −(1/λ) ln(1 − U) directly; using ln U is simply the common shortcut that exploits this symmetry.
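
A minimal Python sketch of the two equivalent generators (the rate λ = 2.0 and the sample size are arbitrary illustrative choices):

import math
import random

# Inverse-transform generation of exponential variates with rate lam (mean 1/lam).
def expo_ln_u(lam, rng):
    u = rng.random()
    while u == 0.0:                             # guard against log(0); random() is in [0, 1)
        u = rng.random()
    return -math.log(u) / lam                   # uses ln U

def expo_ln_1_minus_u(lam, rng):
    return -math.log(1.0 - rng.random()) / lam  # uses ln(1 - U); same distribution

if __name__ == "__main__":
    rng = random.Random(0)
    lam = 2.0                                   # illustrative rate
    a = [expo_ln_u(lam, rng) for _ in range(100000)]
    b = [expo_ln_1_minus_u(lam, rng) for _ in range(100000)]
    print(f"sample means: {sum(a)/len(a):.3f} vs {sum(b)/len(b):.3f} (both close to 1/lam = {1/lam})")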

4. Which type of simulation would you use for the following problems:

a. To model traffic in a wireless cell network given that the traffic is bursty.

b. To model scheduling in a multiprocessor computer system given that the request arrivals have a
geometric distribution.

c. To verify the value of π, which is defined as the ratio of a circle's circumference to its diameter, C/D.

a. Discrete-event simulation would be appropriate for modeling traffic in a wireless cell network because
the traffic is bursty, meaning that it occurs in bursts at irregular intervals.

b. Discrete-event simulation would be appropriate for modeling scheduling in a multiprocessor computer system if the request arrivals have a geometric distribution.

c. Monte Carlo simulation would be appropriate for verifying the value of π, which is defined as the ratio of a circle's circumference to its diameter, because this is a static problem with no time dimension: π can be estimated by randomly sampling points in a unit square and counting the fraction that fall inside the inscribed quarter circle.
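
A minimal Python sketch of such a Monte Carlo estimate of π (the sample size is an arbitrary illustrative choice):

import random

# Monte Carlo estimate of pi: sample points in the unit square and count
# how many fall inside the quarter circle of radius 1.
def estimate_pi(samples, seed=0):
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

if __name__ == "__main__":
    print("pi estimate with 1,000,000 samples:", estimate_pi(1_000_000))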

5. Using the multiplicative congruential method, find the period of the generator for a = 17, m = 26, and X0 = 1, 2, 3, and 4. Comment on the produced numbers and resulting periods.

The Multiplicative Congruential Method generates pseudo-random numbers using the recurrence:

Xn = (a * Xn-1) mod m

where Xn is the nth generated number, Xn-1 is the previous number, a is the multiplier, and m is the modulus. The period of the generator is the number of values produced before the sequence repeats; it is determined by the multiplier a, the modulus m, and the seed X0.

Given a = 17, m = 26, and X0 = 1, 2, 3, and 4, the sequences are:

X0 = 1:
X1 = (17 * 1) mod 26 = 17
X2 = (17 * 17) mod 26 = 3
X3 = (17 * 3) mod 26 = 25
X4 = (17 * 25) mod 26 = 9
X5 = (17 * 9) mod 26 = 23
X6 = (17 * 23) mod 26 = 1

The sequence 1, 17, 3, 25, 9, 23 then repeats, so the period for X0 = 1 is 6.

X0 = 2:
X1 = (17 * 2) mod 26 = 8
X2 = (17 * 8) mod 26 = 6
X3 = (17 * 6) mod 26 = 24
X4 = (17 * 24) mod 26 = 18
X5 = (17 * 18) mod 26 = 20
X6 = (17 * 20) mod 26 = 2

The sequence 2, 8, 6, 24, 18, 20 then repeats, so the period for X0 = 2 is also 6.

X0 = 3:
X1 = (17 * 3) mod 26 = 25
X2 = (17 * 25) mod 26 = 9
X3 = (17 * 9) mod 26 = 23
X4 = (17 * 23) mod 26 = 1
X5 = (17 * 1) mod 26 = 17
X6 = (17 * 17) mod 26 = 3

The sequence 3, 25, 9, 23, 1, 17 then repeats, so the period for X0 = 3 is 6 (this is the same cycle as for X0 = 1, entered at a different point).

X0 = 4:
X1 = (17 * 4) mod 26 = 16
X2 = (17 * 16) mod 26 = 12
X3 = (17 * 12) mod 26 = 22
X4 = (17 * 22) mod 26 = 10
X5 = (17 * 10) mod 26 = 14
X6 = (17 * 14) mod 26 = 4

The sequence 4, 16, 12, 22, 10, 14 then repeats, so the period for X0 = 4 is 6.

Comment: for a = 17 and m = 26, all four seeds give a period of 6, because the period is limited by the multiplicative order of 17 modulo 26, which is 6. Only 6 of the 26 possible values are therefore ever produced from a given seed, so the generated numbers cover the range 0 to 25 only sparsely. Note also that odd seeds (1 and 3) produce only odd values and even seeds (2 and 4) produce only even values, since the modulus is even. The period and the quality of the generated numbers thus depend strongly on the choice of a, m, and X0, and these parameters should be chosen carefully for any application that requires long, well-distributed pseudo-random sequences.
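
The periods above can be checked with a small illustrative Python helper:

# Compute the period of a multiplicative congruential generator X_n = (a * X_{n-1}) mod m.
def mcg_period(a, m, x0):
    seed = x0
    x = (a * x0) % m
    count = 1
    while x != seed:
        x = (a * x) % m
        count += 1
        if count > m:            # the sequence never returns to the seed
            return None
    return count

if __name__ == "__main__":
    for seed in (1, 2, 3, 4):
        print(f"a=17, m=26, X0={seed}: period = {mcg_period(17, 26, seed)}")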

6. Generate five 6-bit numbers using the Tausworthe method for the characteristic polynomial X^6 + X + 1, starting with a seed of X0 = (0.111111)2.

The Tausworthe method is a linear feedback shift register (LFSR) technique for generating pseudo-random numbers: it produces a stream of bits and then groups them into l-bit words. For the characteristic polynomial X^6 + X + 1, the corresponding bit recurrence (working modulo 2) is:

b(n) = b(n-5) XOR b(n-6)

The seed X0 = (0.111111)2 supplies the first six bits, b(0) b(1) b(2) b(3) b(4) b(5) = 1 1 1 1 1 1. Applying the recurrence produces the following bit stream, shown in groups of six:

111111 000001 000011 000101 001111 010001 ...

Grouping the bits generated after the seed into 6-bit words gives the five numbers:

X1 = (000001)2 = 1/64
X2 = (000011)2 = 3/64
X3 = (000101)2 = 5/64
X4 = (001111)2 = 15/64
X5 = (010001)2 = 17/64

that is, the integers 1, 3, 5, 15, and 17 when read as 6-bit values. (If the seed word itself is counted as the first number, the five numbers are instead 63/64, 1/64, 3/64, 5/64, and 15/64.)
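
The bit stream above can be reproduced with a short illustrative Python sketch (the grouping convention, which skips the seed word, is the one described above):

# Tausworthe / LFSR bit generator for the characteristic polynomial X^6 + X + 1,
# i.e. the recurrence b(n) = b(n-5) XOR b(n-6).
def tausworthe_bits(seed_bits, total_bits):
    bits = list(seed_bits)
    while len(bits) < total_bits:
        bits.append(bits[-5] ^ bits[-6])
    return bits

if __name__ == "__main__":
    seed = [1, 1, 1, 1, 1, 1]                 # X0 = (0.111111)_2
    bits = tausworthe_bits(seed, 36)          # seed word plus five 6-bit words
    words = [bits[i:i + 6] for i in range(0, 36, 6)]
    for w in words[1:]:                       # skip the seed word
        value = int("".join(map(str, w)), 2)
        print("".join(map(str, w)), "=", value, f"({value}/64)")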
