Scientific Modeling
SIMULATION
Submitted To: Sir Asif Jamali
Submitted By: Shazia Solangi
Roll No: 2K17/LCS/42
Date: 2/ 2 /2021
Day: Tuesday
Department: Computer Science
University Of Sindh
WEEK 9
Basic Probability & Statistics:
Probability is the study of random events. It is used in analyzing games of
chance, genetics, weather prediction, and a myriad of other everyday
events. Statistics is the mathematics we use to collect, organize, and
interpret numerical data.
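The two ideas above can be sketched together in a short simulation: estimating a probability as a relative frequency, then summarizing the same data with statistics. The die experiment and sample size are illustrative choices, not part of the original notes.

```python
import random
import statistics

random.seed(42)

# Simulate 10,000 rolls of a fair six-sided die (a random event).
rolls = [random.randint(1, 6) for _ in range(10_000)]

# Probability: estimate P(roll = 6) as a relative frequency.
p_six = rolls.count(6) / len(rolls)

# Statistics: collect, organize, and interpret the numerical data.
mean = statistics.mean(rolls)
stdev = statistics.stdev(rolls)

print(f"Estimated P(roll = 6): {p_six:.3f}")  # theoretical value is 1/6
print(f"Sample mean: {mean:.2f}, sample stdev: {stdev:.2f}")
```

With enough rolls the estimate settles near the theoretical value of 1/6, which is the frequency interpretation of probability in action.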
WEEK 10
Confidence Intervals & Hypothesis Tests:
Confidence intervals use data from a sample to estimate a population
parameter. Hypothesis tests use data from a sample to test a
specified hypothesis. Hypothesis testing requires that we have a
hypothesized parameter.
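As a minimal sketch of both ideas, the snippet below builds a 95% confidence interval for a sample mean using the normal approximation, then runs a two-sided z-test against a hypothesized mean. The sample data, the true parameters, and the hypothesized value mu0 are all made-up for illustration.

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical sample: 200 service times from a process with true mean 5.0.
sample = [random.gauss(5.0, 1.5) for _ in range(200)]

n = len(sample)
xbar = statistics.mean(sample)
s = statistics.stdev(sample)

# 95% confidence interval for the population mean (normal approximation).
half = 1.96 * s / math.sqrt(n)
ci = (xbar - half, xbar + half)

# Hypothesis test: H0: mu = 5.0 vs H1: mu != 5.0 (z-test, 5% level).
mu0 = 5.0
z = (xbar - mu0) / (s / math.sqrt(n))
reject = abs(z) > 1.96

print(f"95% CI for the mean: ({ci[0]:.2f}, {ci[1]:.2f})")
print(f"z = {z:.2f}, reject H0: {reject}")
```

Note how both procedures use the same sample statistics; the test needs the hypothesized parameter mu0, while the interval does not.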
WEEK 12
Non-Stationary Poisson Processes:
A nonstationary Poisson process satisfies the independent-increments
property: the numbers of arrivals in non-overlapping intervals are
independent. With Λ(τ) denoting the expected number of arrivals by time τ,
Pr{Z(τ+∆τ) − Z(τ) = m | Z(τ) = k} = Pr{Z(τ+∆τ) − Z(τ) = m}
= e^−[Λ(τ+∆τ)−Λ(τ)] [Λ(τ+∆τ) − Λ(τ)]^m / m!
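A standard way to simulate a nonstationary Poisson process is thinning: generate candidate arrivals from a homogeneous process at an upper-bound rate, then keep each candidate at time t with probability rate(t) / rate_max. The sinusoidal rate function and the time horizon below are hypothetical choices for illustration.

```python
import math
import random

random.seed(1)

def rate(t):
    # Hypothetical time-varying arrival rate lambda(t), peaking at t = 6.
    return 2.0 + 1.5 * math.sin(math.pi * t / 12.0)

LAMBDA_MAX = 3.5  # upper bound on rate(t) over [0, 12]

def nspp_arrivals(t_end):
    """Generate arrival times on [0, t_end] by thinning a homogeneous
    Poisson process of rate LAMBDA_MAX: keep a candidate arrival at
    time t with probability rate(t) / LAMBDA_MAX."""
    t, arrivals = 0.0, []
    while True:
        t += random.expovariate(LAMBDA_MAX)  # next candidate arrival
        if t > t_end:
            return arrivals
        if random.random() < rate(t) / LAMBDA_MAX:
            arrivals.append(t)

times = nspp_arrivals(12.0)
print(f"{len(times)} arrivals in [0, 12]")
```

The expected count equals Λ(12) − Λ(0), i.e. the integral of rate(t) over [0, 12], consistent with the formula above.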
Batch Arrivals:
Queues that feature multiple entities arriving simultaneously are
among the oldest models in queueing theory, and are often referred to as
“batch” (or, in some cases, “bulk”) arrival queueing systems. Here we
study the effect of batch arrivals on infinite-server queues.
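A small sketch of an infinite-server queue with batch arrivals: batches arrive as a Poisson process, each batch brings several customers at once, and every customer begins service immediately (infinitely many servers). The arrival rate, batch-size distribution, and service-time distribution are assumed for illustration.

```python
import random

random.seed(7)

def simulate_batch_infinite_server(t_end, lam=1.0, mean_service=2.0):
    """Infinite-server queue with batch arrivals: batches arrive as a
    Poisson process of rate lam; each batch brings 1-4 customers
    (uniformly); every customer starts service at once with an
    exponential service time. Returns (arrival, departure) pairs."""
    t, jobs = 0.0, []
    while True:
        t += random.expovariate(lam)        # next batch arrival
        if t > t_end:
            return jobs
        batch = random.randint(1, 4)        # entities arriving together
        for _ in range(batch):
            jobs.append((t, t + random.expovariate(1.0 / mean_service)))

jobs = simulate_batch_infinite_server(100.0)
busy_at_50 = sum(1 for a, d in jobs if a <= 50.0 < d)
print(f"{len(jobs)} customers total, {busy_at_50} in service at t = 50")
```

Compared with single arrivals at the same customer rate, batching makes the number-in-system more variable even though the long-run mean (arrival rate of customers times mean service time) is the same.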
Tests on Generators:
A test generator is software used to create tests for a variety of uses. An
example of a test generator is a middle school science teacher using
software to create a mid-term exam for his students.
Markov Chain Monte Carlo (MCMC) Simulation:
In statistics, Markov chain Monte Carlo (MCMC) methods comprise a class
of algorithms for sampling from a probability distribution. By constructing
a Markov chain that has the desired distribution as its equilibrium
distribution, one can obtain a sample of the desired distribution by
recording states from the chain. The more steps are included, the more
closely the distribution of the sample matches the actual desired
distribution.
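The idea above can be sketched with a random-walk Metropolis sampler, one of the simplest MCMC algorithms. The target density (a standard normal, used unnormalized) and the proposal width are illustrative assumptions.

```python
import math
import random

random.seed(3)

def target_pdf(x):
    # Unnormalized target density: standard normal, up to a constant.
    return math.exp(-0.5 * x * x)

def metropolis(n_steps, step=1.0):
    """Random-walk Metropolis: propose x' = x + U(-step, step) and
    accept with probability min(1, pi(x')/pi(x)). The chain's
    equilibrium distribution is the target, so recorded states are
    (dependent) samples from it."""
    x, chain = 0.0, []
    for _ in range(n_steps):
        proposal = x + random.uniform(-step, step)
        if random.random() < target_pdf(proposal) / target_pdf(x):
            x = proposal            # accept the proposed move
        chain.append(x)             # record the current state
    return chain

chain = metropolis(50_000)
mean = sum(chain) / len(chain)
print(f"sample mean of the chain: {mean:.2f} (target mean is 0)")
```

As the notes say, the more steps recorded, the closer the empirical distribution of the chain gets to the target distribution.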
WEEK 13
Model Validation:
Model validation is defined within regulatory guidance as “the set of processes
and activities intended to verify that models are performing as expected, in
line with their design objectives, and business uses.” It also identifies
“potential limitations and assumptions, and assesses their possible impact.”
Hypothesis:
A hypothesis is a suggested solution for an unexplained occurrence that does
not fit into current accepted scientific theory. The basic idea of a hypothesis is
that there is no pre-determined outcome.
Theories and laws:
Scientific laws describe phenomena that the scientific community has found to
be provably true. Generally, laws describe what will happen in a given
situation, as demonstrable by a mathematical equation, whereas
theories describe how the phenomenon happens.
Inductivism:
Inductivism is an approach to logic whereby scientific laws are inferred from
particular facts or observational evidence. This approach can also be
applied to theory-building in the social sciences, with theory being inferred
by reasoning from particular facts to general principles.
Deductivism:
Deductivism is an attempt to develop a position that avoids the difficulties that
beset inductivism. It is accepted that theoretical elements enter science at all
stages and that inductive generalizations lack proper justification.
Validity of Methodology:
Validity refers to how accurately a method measures what it is intended to
measure. If research has high validity that means it produces results that
correspond to real properties, characteristics, and variations in the physical or
social world. High reliability is one indicator that a measurement is valid.
Validity of data:
Data validation is intended to provide certain well-defined guarantees for
fitness and consistency of data in an application or automated system.
Validity of Inference:
Inference validity refers to the validity of a research design as a whole. It
refers to whether you can trust the conclusions of a study. Statistical
measures show relationships, but it is the theory and the study design that
affect what kinds of claims to causality you can reasonably make.
Validation vs verification:
Verification is a theoretical exercise designed to make sure that no
requirements are missed in the design, whereas validation is a practical
exercise that ensures that the product, as built, will function to meet the
requirements.