
SCIENTIFIC MODELING &

SIMULATION
Submitted To: Sir Asif Jamali
Submitted By: Shazia Solangi
Roll No: 2K17/LCS/42
Date: 2/2/2021
Day: Tuesday
Department: Computer Science

University Of Sindh
WEEK 9
Basic Probability & Statistics:
Probability is the study of random events. It is used in analyzing games of
chance, genetics, weather prediction, and a myriad of other everyday
events. Statistics is the mathematics we use to collect, organize, and
interpret numerical data.

What are variance & correlation:


The strength of the relationship between X and Y is sometimes expressed by
squaring the correlation coefficient and multiplying by 100. The resulting
statistic is known as variance explained (or R²). Example: a correlation of 0.5
means 0.5² × 100 = 25% of the variance in Y is "explained" or predicted by the
X variable.
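As a sketch of this calculation (the X and Y values below are hypothetical, chosen only for illustration), the correlation coefficient and variance explained can be computed directly:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data for illustration.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
r = pearson_r(x, y)
variance_explained = r ** 2 * 100   # R² as a percentage of variance in Y
```

For these values r ≈ 0.77, so about 60% of the variance in Y is "explained" by X.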

WEEK 10
Confidence Intervals & Hypothesis Tests:
Confidence intervals use data from a sample to estimate a population
parameter. Hypothesis tests use data from a sample to test a
specified hypothesis. Hypothesis testing requires that we have a
hypothesized parameter.
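A minimal sketch of a confidence interval for a population mean, using the normal approximation (z = 1.96 for 95%; with only nine observations a t critical value would give a slightly wider interval). The scores are the test scores used in the mean example in these notes:

```python
import math

def mean_ci(sample, z=1.96):
    """Approximate 95% confidence interval for the population mean,
    using the normal approximation with z = 1.96."""
    n = len(sample)
    m = sum(sample) / n
    # Sample standard deviation (n - 1 in the denominator).
    s = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))
    half = z * s / math.sqrt(n)
    return m - half, m + half

data = [65, 95, 73, 88, 83, 92, 74, 83, 94]
lo, hi = mean_ci(data)   # interval centered on the sample mean, 83.0
```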

What is the mean of a test?


The mean is calculated by adding all the numbers together and dividing by
the total number of numbers. For example, let's take a set of test scores from
a math class of nine students. The scores were: 65, 95, 73, 88, 83, 92, 74, 83,
and 94. To find the mean, add all these scores together.
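Carried out directly, the calculation looks like this:

```python
# Test scores from the nine students in the example.
scores = [65, 95, 73, 88, 83, 92, 74, 83, 94]
mean = sum(scores) / len(scores)   # 747 / 9
print(mean)  # → 83.0
```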
The law of large numbers (LLN):
The law of large numbers, in probability and statistics, states that as a
sample size grows, its mean gets closer to the average of the whole
population. In the 16th century, mathematician Gerolamo Cardano recognized
the law of large numbers but never proved it.
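A quick sketch of the law at work, simulating rolls of a fair six-sided die (true mean 3.5); the seed is fixed only so the run is reproducible:

```python
import random

random.seed(1)  # fixed seed for a reproducible run

def mean_of_die_rolls(n_rolls):
    """Sample mean of n_rolls of a fair six-sided die (true mean 3.5)."""
    total = sum(random.randint(1, 6) for _ in range(n_rolls))
    return total / n_rolls

small = mean_of_die_rolls(10)       # small sample: noisy
large = mean_of_die_rolls(100_000)  # large sample: close to 3.5
```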

Random Number Generation:


Random number generation is a process that produces a sequence of numbers or
symbols that cannot reasonably be predicted better than by random chance.
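In practice, simulations use deterministic pseudorandom generators. One classic scheme is the linear congruential generator; this minimal sketch uses the well-known Numerical Recipes constants:

```python
class LCG:
    """Minimal linear congruential generator:
    x_{n+1} = (a * x_n + c) mod m.
    Constants from Numerical Recipes; the seed determines the
    entire sequence, so the output is deterministic."""
    def __init__(self, seed):
        self.state = seed
        self.a, self.c, self.m = 1664525, 1013904223, 2 ** 32

    def next_uniform(self):
        """Next pseudorandom number in [0, 1)."""
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m

gen = LCG(seed=42)
sample = [gen.next_uniform() for _ in range(5)]
```

Because the sequence is fully determined by the seed, two generators started from the same seed produce identical output.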

WEEK 12
Nonstationary Poisson Processes:
A nonstationary Poisson process satisfies the independent-increments
property: the numbers of arrivals in nonoverlapping intervals are
independent. With cumulative rate function Λ, the increment over
[τ, τ + ∆τ] is Poisson distributed:

Pr{Z(τ+∆τ) − Z(τ) = m | Z(τ) = k} = Pr{Z(τ+∆τ) − Z(τ) = m}
= e^(−[Λ(τ+∆τ) − Λ(τ)]) [Λ(τ+∆τ) − Λ(τ)]^m / m!,  m = 0, 1, 2, ...
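One standard way to simulate such a process is Lewis–Shedler thinning; a minimal sketch, where the sinusoidal rate function is only an illustrative assumption:

```python
import math
import random

def thinning(rate_fn, rate_max, horizon, seed=0):
    """Simulate arrival times of a nonstationary Poisson process on
    [0, horizon] by thinning: generate candidate arrivals at the
    constant rate rate_max >= rate_fn(t), and keep each candidate at
    time t with probability rate_fn(t) / rate_max."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_max)   # next candidate arrival time
        if t > horizon:
            return arrivals
        if rng.random() < rate_fn(t) / rate_max:
            arrivals.append(t)

# Illustrative rate function (an assumption): lambda(t) = 2 + sin(t) <= 3.
times = thinning(lambda t: 2 + math.sin(t), rate_max=3.0, horizon=10.0)
```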

Batch Arrivals:
Queues that feature multiple entities arriving simultaneously are
among the oldest models in queueing theory, and are often referred to as
"batch" (or, in some cases, "bulk") arrival queueing systems. In this context
we study the effect of batch arrivals on infinite server queues.
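A minimal sketch of a batch (compound Poisson) arrival stream; the geometric batch-size distribution and the function name are assumptions made for illustration:

```python
import random

def simulate_batch_arrivals(rate, mean_batch, horizon, seed=0):
    """Total customers arriving on [0, horizon] when batches arrive as a
    Poisson process with the given rate and each batch size is geometric
    on {1, 2, ...} with the given mean (an illustrative choice)."""
    rng = random.Random(seed)
    p = 1.0 / mean_batch              # geometric success probability
    t, total = 0.0, 0
    while True:
        t += rng.expovariate(rate)    # time of the next batch arrival
        if t > horizon:
            return total
        size = 1                      # sample the batch size
        while rng.random() > p:
            size += 1
        total += size

# Expected total is rate * horizon * mean_batch = 200 here.
total = simulate_batch_arrivals(rate=1.0, mean_batch=2.0, horizon=100.0)
```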

Tests on generators:
Empirical tests check whether a random number generator's output behaves
like a sample of independent, uniformly distributed random numbers. Common
examples are the chi-square frequency test for uniformity, the
Kolmogorov–Smirnov test, and runs tests for independence.
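Random number generators are checked with statistical tests for uniformity and independence. A minimal sketch of a chi-square frequency test, applied here to Python's built-in generator:

```python
import random

def chi_square_uniformity(samples, bins=10):
    """Chi-square statistic for testing that samples in [0, 1) are
    uniform: compare observed bin counts with the expected count
    per bin."""
    counts = [0] * bins
    for u in samples:
        counts[min(int(u * bins), bins - 1)] += 1
    expected = len(samples) / bins
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(123)
stat = chi_square_uniformity([rng.random() for _ in range(10_000)])
# With 10 bins (9 degrees of freedom), a statistic far above the 5%
# critical value of about 16.9 would cast doubt on uniformity.
```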
Markov Chain Monte Carlo (MCMC) Simulation:
In statistics, Markov chain Monte Carlo (MCMC) methods comprise a class
of algorithms for sampling from a probability distribution. By constructing
a Markov chain that has the desired distribution as its equilibrium
distribution, one can obtain a sample of the desired distribution by
recording states from the chain. The more steps are included, the more
closely the distribution of the sample matches the actual desired
distribution. 
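A minimal random-walk Metropolis sampler, here targeting a standard normal distribution; the step size and target are illustrative choices:

```python
import math
import random

def metropolis(log_density, steps, step_size=1.0, seed=0):
    """Random-walk Metropolis sampler: propose x' = x + Gaussian noise
    and accept with probability min(1, p(x') / p(x)). The chain's
    equilibrium distribution is the target density p."""
    rng = random.Random(seed)
    x, chain = 0.0, []
    for _ in range(steps):
        proposal = x + rng.gauss(0, step_size)
        # Compare log densities to avoid overflow/underflow.
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        chain.append(x)
    return chain

# Target: standard normal, log density -x^2 / 2 up to a constant.
chain = metropolis(lambda x: -0.5 * x * x, steps=20_000)
```

Recording states from the chain gives a sample whose mean and variance approach 0 and 1 as the number of steps grows.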

Variance reduction techniques:


Variance reduction techniques reduce the standard error of a simulation
estimate by decreasing the variance of the estimator, and can be used to
speed up simulations by achieving a specified level of precision with a
smaller number of trials. Two common variance reduction techniques are
antithetic variates (AV) and control variates (CV).
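A minimal sketch of antithetic variates (AV), estimating E[e^U] = e − 1 for U uniform on (0, 1); pairing u with 1 − u induces negative correlation that lowers the variance for a monotone integrand:

```python
import math
import random

def antithetic_estimate(f, n_pairs, seed=0):
    """Monte Carlo estimate of E[f(U)], U ~ Uniform(0, 1), using
    antithetic pairs (u, 1 - u); the negative correlation between the
    two halves of each pair reduces the estimator's variance."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        u = rng.random()
        total += 0.5 * (f(u) + f(1 - u))
    return total / n_pairs

# E[e^U] = e - 1, approximately 1.71828.
est = antithetic_estimate(math.exp, n_pairs=10_000)
```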

WEEK 13
Model Validation:
Model validation is defined within regulatory guidance as “the set of processes
and activities intended to verify that models are performing as expected, in
line with their design objectives, and business uses." It also identifies
"potential limitations and assumptions, and assesses their possible impact."

Hypothesis:
A hypothesis is a suggested solution for an unexplained occurrence that does
not fit into current accepted scientific theory. The basic idea of a hypothesis is
that there is no pre-determined outcome.
Theories and laws:
Scientific laws describe phenomena that the scientific community has found to
be provably true. Generally, laws describe what will happen in a given
situation, as demonstrable by a mathematical equation, whereas theories
describe how the phenomenon happens.

Inductivism:
Inductivism is an approach to logic whereby scientific laws are inferred from
particular facts or observational evidence. This approach can also be
applied to theory-building in the social sciences, with theory being inferred
by reasoning from particular facts to general principles.

Deductivism:
Deductivism is an attempt to develop a position that avoids the difficulties that
beset inductivism. It is accepted that theoretical elements enter science at all
stages and that inductive generalizations lack proper justification.

Modeling a scientific process:


It is more important to organize the parts in a way that makes sense to you
than to list them sequentially. There is no single instrument that measures
climate change.
WEEK 14
Validity Concept:
The concept of validity was formulated by Kelley (1927, p. 14), who stated that
a test is valid if it measures what it claims to measure. For example, a test of
intelligence should measure intelligence and not something else (such as
memory). A distinction can be made between internal and external validity.

Validity Methodology:
Validity refers to how accurately a method measures what it is intended to
measure. If research has high validity that means it produces results that
correspond to real properties, characteristics, and variations in the physical or
social world. High reliability is one indicator that a measurement is valid.

Validity of data:
Data validation is the process of checking the accuracy and quality of data
before it is used. It is intended to provide certain well-defined guarantees
for fitness and consistency of data in an application or automated system.

Validity of Inference:
Inference validity refers to the validity of a research design as a whole. It
refers to whether you can trust the conclusions of a study. Statistical
measures show relationships, but it is the theory and the study design that
affect what kinds of claims to causality you can reasonably make.

Validation vs verification:
Verification is a theoretical exercise designed to make sure that no
requirements are missed in the design, whereas validation is a practical
exercise that ensures that the product, as built, will function to meet the
requirements.
