Module 3.1 & 3.2


Experimental and modelling skills
Scientific method

• The scientific method is a process by which scientists investigate, verify, or construct an accurate and
reliable account of natural phenomena.
• This is done by creating an objective framework for scientific inquiry and analysing the results
scientifically to reach a conclusion that either supports or contradicts the observation made
at the beginning.
Scientific Method Steps

• Observation and formulation of a question: This is the first step of a scientific method. To start one,
an observation has to be made into any observable aspect or phenomena of the universe, and a
question needs to be asked about that aspect. For example, you can ask, “Why is the sky black at
night?” or “Why is air invisible?”
• Data Collection and Hypothesis: The next step involved in the scientific method is to collect all
related data and formulate a hypothesis based on the observation. The hypothesis could be the
cause of the phenomena, its effect, or its relation to any other phenomena.
• Testing the hypothesis: After the hypothesis is made, it needs to be tested scientifically. Scientists do
this by conducting experiments. The aim of these experiments is to determine whether the
hypothesis agrees with or contradicts the observations made in the real world. The confidence in the
hypothesis increases or decreases based on the result of the experiments.
• Analysis and Conclusion: This step involves the use of proper mathematical and other scientific
procedures to determine the results of the experiment. Based on the analysis, the future course of
action can be determined. If the data found in the analysis is consistent with the hypothesis, it is
accepted. If not, then it is rejected or modified and analysed again.
1. Make an observation.
2. Ask a question.
3. Form a hypothesis, or testable explanation.
4. Make a prediction based on the hypothesis.
5. Test the prediction.
6. Iterate: use the results to make new hypotheses or predictions.
Scientific Method Examples

• Growing bean plants:


• What is the purpose: The main purpose of this experiment is to determine whether the bean plant grows
better when kept inside or outside, comparing the growth rate over a time frame of four weeks.
• Construction of hypothesis: The hypothesis used is that the bean plant can grow anywhere if the
scientific method is used.
• Executing the hypothesis and collecting the data: Four bean plants are planted in identical pots using
the same soil. Two are placed inside, and the other two are placed outside. Parameters like the
amount of exposure to sunlight, and amount of water all are the same. After the completion of four
weeks, all four plant sizes are measured.
• Analyse the data: While analysing the data, the average height of plants should be taken into account
from both places to determine which environment is more suitable for growing the bean plants.
• Conclusion: The conclusion is drawn after analyzing the data.
• Results: Results can be reported in tabular form.
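The analysis step of the bean-plant experiment can be sketched in a few lines of Python. The heights below are made-up numbers standing in for the week-4 measurements:

```python
# Hypothetical week-4 heights (cm) for the four bean plants;
# real values would come from the actual measurements.
inside = [22.5, 24.0]
outside = [28.0, 30.5]

def average(heights):
    """Mean height of a group of plants."""
    return sum(heights) / len(heights)

avg_inside = average(inside)
avg_outside = average(outside)

# Report the results in tabular form, as suggested above.
print(f"{'Location':<10}{'Avg height (cm)':>16}")
print(f"{'Inside':<10}{avg_inside:>16.2f}")
print(f"{'Outside':<10}{avg_outside:>16.2f}")
```

Comparing the two averages then tells us which environment is more suitable for growing the plants.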
Role of hypothesis in experiment

• A hypothesis is an assumption that is made based on some evidence. It is the initial point of any
investigation, translating the research questions into predictions.
• It includes components like variables, population and the relation
between the variables.
• A research hypothesis is a hypothesis that is used to test the
relationship between two or more variables.
Characteristics of Hypothesis

• The hypothesis should be clear and precise for it to be considered reliable.
• If the hypothesis is a relational hypothesis, then it should be stating
the relationship between variables.
• The hypothesis must be specific and should have scope for conducting
more tests.
• The explanation of the hypothesis must be very simple, and it should be understood that the
simplicity of a hypothesis is not related to its significance.
Sources of Hypothesis

• Resemblance between phenomena.
• Observations from past studies, present-day experiences and from
the competitors.
• Scientific theories.
• General patterns that influence the thinking process of people.
Types of Hypothesis

• Simple hypothesis
• Complex hypothesis
• Directional hypothesis
• Non-directional hypothesis
• Null hypothesis
• Associative and causal hypothesis
• Simple Hypothesis
• It shows a relationship between one dependent variable and a single
independent variable. For example – If you eat more vegetables, you will lose
weight faster. Here, eating more vegetables is an independent variable, while
losing weight is the dependent variable.
• Complex Hypothesis
• It shows the relationship between two or more dependent variables and two or
more independent variables. Eating more vegetables and fruits leads to weight
loss, glowing skin, and reduces the risk of many diseases such as heart disease.
• Directional Hypothesis
• It states the expected direction of the relationship between the variables, reflecting the researcher's
commitment to a particular outcome. For example: children aged four years who eat proper food over a
five-year period have higher IQ levels than children not having a proper meal. This states both the
effect and the direction of the effect.
• Non-directional Hypothesis
• It is used when there is no theory involved. It is a statement that a relationship
exists between two variables, without predicting the exact nature (direction) of the
relationship.
• Null Hypothesis
• It provides a statement contrary to the research hypothesis: it asserts that there is no relationship
between the independent and dependent variables. It is denoted by the symbol “H0”.
• Associative and Causal Hypothesis
• An associative hypothesis states that a change in one variable results in a change in the other
variable, whereas a causal hypothesis proposes a cause-and-effect interaction between two or more
variables.
Examples of Hypothesis

• “Consumption of sugary drinks every day leads to obesity” is an example of a simple hypothesis.
• “All lilies have the same number of petals” is an example of a null hypothesis.
• “If a person gets 7 hours of sleep, then he will feel less fatigue than if he sleeps less” is an
example of a directional hypothesis.
Functions of Hypothesis

• A hypothesis makes observations and experiments possible.
• It is the starting point of the investigation.
• A hypothesis helps in verifying observations.
• It helps direct the inquiry in the right direction.
How will Hypothesis help in the Scientific Method?

• Formation of question
• Doing background research
• Creation of hypothesis
• Designing an experiment
• Collection of data
• Result analysis
• Summarizing the experiment
• Communicating the results
Dependent and independent variables
• In research, variables are any characteristics that can take on different values, such as
height, age, temperature, or test scores.
• Researchers often manipulate or measure independent and dependent variables in
studies to test cause-and-effect relationships.
• The independent variable is the cause. Its value is independent of other variables in your
study.
• The dependent variable is the effect. Its value depends on changes in the independent
variable.
• Example: Independent and dependent variables. You design a study to test whether
changes in room temperature have an effect on math test scores.
• Your independent variable is the temperature of the room. You vary the room
temperature by making it cooler for half the participants, and warmer for the other half.
• Your dependent variable is math test scores. You measure the math skills of all
participants using a standardized test and check whether they differ based on room
temperature.
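The room-temperature study above can be sketched as a simple comparison of group means. The scores are hypothetical values invented for illustration:

```python
# Independent variable: room temperature (cool vs warm halves).
# Dependent variable: math test scores (hypothetical, out of 100).
scores_cool = [78, 82, 75, 80]
scores_warm = [70, 68, 74, 71]

def mean(xs):
    """Arithmetic mean of a list of scores."""
    return sum(xs) / len(xs)

diff = mean(scores_cool) - mean(scores_warm)
print(f"Cooler room mean: {mean(scores_cool):.2f}")
print(f"Warmer room mean: {mean(scores_warm):.2f}")
print(f"Difference between conditions: {diff:.2f}")
```

A difference between the two means suggests that varying the independent variable (temperature) affected the dependent variable (scores).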
What is an independent variable?
• An independent variable is the variable you manipulate or vary in an
experimental study to explore its effects. It’s called “independent” because
it’s not influenced by any other variables in the study.
• Independent variables are also called:
• Explanatory variables (they explain an event or outcome)
• Predictor variables (they can be used to predict the value of a dependent
variable)
• Right-hand-side variables (they appear on the right-hand side of a
regression equation).
• These terms are especially used in statistics, where you estimate the
extent to which an independent variable change can explain or predict
changes in the dependent variable.
Types of independent variables

• There are two main types of independent variables.


• Experimental independent variables can be directly manipulated by
researchers.
• Subject variables cannot be manipulated by researchers, but they can
be used to group research subjects categorically.
• Experimental variables
• In experiments, you manipulate independent variables directly to see how they affect your
dependent variable. The independent variable is usually applied at different levels to see
how the outcomes differ.
• You can apply just two levels in order to find out if an independent variable has an effect at
all.
• You can also apply multiple levels to find out how the independent variable affects the
dependent variable.
• Example: Independent variable levels. You are studying the impact of a new medication on
the blood pressure of patients with hypertension. Your independent variable is the
treatment that you directly vary between groups.
• You have three independent variable levels, and each group gets a different level of
treatment.
• You randomly assign your patients to one of the three groups:
• A low-dose experimental group
• A high-dose experimental group
• A placebo group (to research a possible placebo effect)
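The random assignment to the three treatment levels can be sketched as follows; the patient IDs, group sizes, and seed are illustrative assumptions:

```python
import random

# Hypothetical patient IDs; a real study would use anonymised codes.
patients = [f"P{i:02d}" for i in range(1, 13)]

random.seed(42)           # fixed seed so the sketch is reproducible
random.shuffle(patients)  # random ordering removes assignment bias

# Split the shuffled list evenly across the three levels.
groups = {
    "low dose":  patients[0:4],
    "high dose": patients[4:8],
    "placebo":   patients[8:12],
}
for level, members in groups.items():
    print(level, members)
```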
• Subject variables
• Subject variables are characteristics that vary across participants, and
they can’t be manipulated by researchers.
• For example, gender identity, ethnicity, race, income, and education are
all important subject variables that social researchers treat as
independent variables.
• It’s not possible to randomly assign these to participants, since these are
characteristics of already existing groups.
• Instead, you can create a research design where you compare the outcomes of groups of participants
with different characteristics.
• This is a quasi-experimental design because there’s no random
assignment. Note that any research methods that use non-random
assignment are at risk for research biases like selection bias and
sampling bias.
• Example: Quasi-experimental design. You study whether gender
identity affects neural responses to infant cries.
• Your independent variable is a subject variable, namely the gender
identity of the participants. You have three groups: men, women and
other.
• Your dependent variable is the brain activity response to hearing
infant cries. You record brain activity with fMRI scans when
participants hear infant cries without their awareness.
• After collecting data, you check for statistically significant differences
between the groups.
• You find some and conclude that gender identity influences brain
responses to infant cries.
What is a dependent variable?
• A dependent variable is the variable that changes as a result of the
independent variable manipulation. It’s the outcome you’re
interested in measuring, and it “depends” on your independent
variable.
• In statistics, dependent variables are also called:
• Response variables (they respond to a change in another variable)
• Outcome variables (they represent the outcome you want to
measure)
• Left-hand-side variables (they appear on the left-hand side of a
regression equation)
• The dependent variable is what you record after you’ve manipulated
the independent variable. You use this measurement data to check
whether and to what extent your independent variable influences the
dependent variable by conducting statistical analyses.
• Based on your findings, you can estimate the degree to which your
independent variable variation drives changes in your dependent
variable. You can also predict how much your dependent variable will
change as a result of variation in the independent variable.
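The "left-hand-side" and "right-hand-side" terminology can be made concrete with a minimal regression sketch. The paired observations below are invented for illustration:

```python
# x = independent (right-hand-side) variable,
# y = dependent (left-hand-side) variable. Hypothetical data.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.9]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Least-squares fit of y = intercept + slope * x: the slope estimates
# how much the dependent variable changes per unit of the independent.
slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
intercept = mean_y - slope * mean_x

print(f"y = {intercept:.2f} + {slope:.2f} * x")
```

The fitted slope can then be used to predict how much the dependent variable will change for a given change in the independent variable.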
Control in experiment
• An experimental control is used in scientific experiments to minimize the
effect of variables which are not the interest of the study. The control can
be an object, population, or any other variable which a scientist would like
to “control.”
• The function of an experimental control is to hold constant the variables
that an experimenter isn’t interested in measuring.
• This helps scientists ensure that there have been no deviations in the
environment of the experiment that could end up influencing the outcome
of the experiment, besides the variable they are investigating.
• A control is important for an experiment because it allows the experiment
to minimize the changes in all other variables except the one being tested.
Control Groups and Experimental Groups
• There will frequently be two groups under observation in an
experiment, the experimental group, and the control group.
• The control group is used to establish a baseline that the behavior of
the experimental group can be compared to.
• If two groups of people were receiving an experimental treatment for
a medical condition, one would be given the actual treatment (the
experimental group) and one would typically be given a placebo or
sugar pill (the control group).
• Without an experimental control group, it is difficult to determine the effects
of the independent variable on the dependent variable in an experiment.
• This is because there can always be outside factors that are influencing the
behavior of the experimental group.
• The function of a control group is to act as a point of comparison, by
attempting to ensure that the variable under examination (the impact of the
medicine) is the thing responsible for creating the results of an experiment.
• The control group is holding other possible variables constant, such as the act
of seeing a doctor and taking a pill, so only the medicine itself is being tested.
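The comparison against the control baseline can be sketched as below; all scores are invented for illustration:

```python
# Hypothetical symptom scores after treatment (lower is better).
# The control group received a placebo, so outside factors (seeing
# a doctor, taking a pill) affect both groups equally.
experimental = [4.1, 3.8, 4.5, 3.9]
placebo      = [6.2, 5.9, 6.5, 6.1]

def mean(xs):
    return sum(xs) / len(xs)

# The control group's mean is the baseline; the difference from it
# estimates the effect of the medicine itself.
effect = mean(placebo) - mean(experimental)
print(f"Estimated treatment effect: {effect:.2f} points")
```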
Why Are Experimental Controls So Important?
• Experimental controls allow scientists to
eliminate varying amounts of uncertainty in their experiments.
• Whenever a researcher does an experiment and wants to ensure that
only the variable they are interested in changing is changing, they
need to utilize experimental controls.
• Experimental controls have been dubbed “controls” precisely because
they allow researchers to control the variables they think might have
an impact on the results of the study.
• If a researcher believes that some outside variables could influence
the results of their research, they’ll use a control group to try and
hold that thing constant and measure any possible influence it has on
the results.
• It is important to note that there may be many different controls for
an experiment, and the more complex a phenomenon under
investigation is, the more controls it is likely to have.
• Not only do controls establish a baseline that the results of an
experiment can be compared to, they also allow researchers to
correct for possible errors.
• If something goes wrong in the experiment, a scientist can check on
the controls of the experiment to see if the error had to do with the
controls. If so, they can correct this next time the experiment is done.
Precision and accuracy
• Accuracy
• The ability of an instrument to measure the true value is known as accuracy. In other words, it is the
closeness of the measured value to a standard or true value. Accuracy is obtained by taking small readings,
which reduce the error of the calculation. The accuracy of a system is classified into three
types, as follows:
• Point Accuracy
• The accuracy of the instrument only at a particular point on its scale is known as point accuracy. It is
important to note that this accuracy does not give any information about the general accuracy of the
instrument.
• Accuracy as Percentage of Scale Range
• The uniform scale range determines the accuracy of a measurement. This can be better understood with the
help of the following example:
Consider a thermometer having a scale range up to 500 ºC. The thermometer has an accuracy of ±0.5
percent of the scale range, i.e. 0.005 × 500 = ±2.5 ºC. Therefore, a reading will have a maximum error of
±2.5 ºC.
• Accuracy as Percentage of True Value
• This type of instrument accuracy is determined by comparing the measured value with the true value.
Deviations of up to ±0.5 percent from the true value are typically neglected.
• Precision
• The closeness of two or more measurements to each other is known as precision. If you
weigh a given substance five times and get 3.2 kg each time, then your measurement is very precise but not
necessarily accurate. Precision is independent of accuracy; the examples below show how you can be precise
but not accurate, and vice versa. Precision is sometimes separated into:
• Repeatability
• The variation arising when the conditions are kept identical and repeated measurements are taken during a
short time period.
• Reproducibility
• The variation arising when the same measurement process is used with different instruments and
operators, and over longer time periods.
• Conclusion
• Accuracy is the degree of closeness between a measurement and its true value. Precision is the degree to
which repeated measurements under the same conditions show the same results.
Accuracy and Precision Examples

The top left image shows the target hit at high precision and
accuracy. The top right image shows the target hit at a high
accuracy but low precision. The bottom left image shows the
target hit at a high precision but low accuracy. The bottom
right image shows the target hit at low accuracy and low
precision.
More Examples

• If a thermometer reads 28 °C outside and it is actually 28 °C outside, then the measurement is said to be
accurate. If the thermometer continuously registers the same temperature for several days, the measurement is
also precise.
• If you measure the mass of a 20 kg body and get 17.4, 17.0, 17.3 and 17.1 kg, your weighing
scale is precise but not very accurate. If your scale gives you values of 19.8, 20.5, 21.0 and 19.6 kg, it is
more accurate than the first scale but not very precise.
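Using the readings above, accuracy can be quantified as the distance of the mean from the true value, and precision as the spread of the readings. A sketch in Python, with the true mass taken as 20 kg:

```python
import statistics

TRUE_MASS = 20.0  # kg, the known true value in the example

scale_a = [17.4, 17.0, 17.3, 17.1]  # precise, but not accurate
scale_b = [19.8, 20.5, 21.0, 19.6]  # accurate, but not precise

for name, readings in [("A", scale_a), ("B", scale_b)]:
    accuracy_error = abs(statistics.mean(readings) - TRUE_MASS)
    spread = statistics.stdev(readings)  # sample standard deviation
    print(f"Scale {name}: mean off by {accuracy_error:.2f} kg, "
          f"spread {spread:.2f} kg")
```

Scale A has the smaller spread (more precise) while scale B's mean is closer to the true value (more accurate), matching the verbal description above.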
Difference between Accuracy and Precision

• Accuracy refers to the level of agreement between the actual measurement and the absolute (true)
value. Precision implies the level of variation that lies in the values of several measurements of the
same factor.
• Accuracy represents how closely the results agree with the standard value. Precision represents how
closely the results agree with one another.
• A single factor or measurement is enough to assess accuracy. Multiple measurements or factors are
needed to comment on precision.
• It is possible for a measurement to be accurate on occasion as a fluke; for a measurement to be
consistently accurate, it should also be precise. Results can be precise without being accurate, or
both precise and accurate.
Random Error
• The difference between actual values and observed values is known as an error.
• Random errors are those kinds of errors which are irregular and thus are random in nature.
• These errors shift each measurement from its actual value by some random amount as well as in a random
direction.
• It is the fluctuating part of the error that actually varies from measurement to measurement.
• Sometimes, the random error is also referred to as the deviation of the total error from its mean value.
• Random error happens because of disturbances in the surroundings, such as changes in temperature or
pressure, or because of an observer misreading the instrument.
• The complete elimination of any kind of error is nearly impossible.
• When an experiment is performed many times, you will observe that random errors are sometimes positive
and sometimes negative. Thus, the average value of a large number of the results of repeated experiments is
very close to the actual value. Yet, there is still some uncertainty about the truth of this value.
• Thus, if one wishes to be more sure of the results, one can quote an interval which contains the actual
value along with the estimated deviation. This can be mathematically expressed as:

result = x ± Δx

• Here, x is the average value of many experimental trials and Δx is the deviation that defines the order of
uncertainty.
Random Error Examples

• Mass measurements on an analytical balance vary with the flow of air and even tiny mass variations in the
sample.
• Weight measurements on a weighing scale fluctuate because it is nearly impossible to stand on the scale the
very same way each time. Averaging multiple measurements minimises the error.
• Posture changes influence height measurements. Reaction speed affects timing estimation.
• Slight change in observing angle affects volume measurements.
Sources of Random Error

• Instrument limitations
• Environmental factors like variations in pressure and temperature
• Mishandling or misreading by observers
How to Reduce Random Error

• Random errors can be reduced using the following methods:


• By increasing sample size
• By repeating the experiments
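The effect of repetition can be illustrated with a simulated measurement. The noise model here (Gaussian with unit standard deviation) is an assumption made purely for the sketch:

```python
import random

random.seed(0)        # fixed seed so the sketch is reproducible
TRUE_VALUE = 50.0

def measure():
    # Each reading is shifted from the true value by a random amount
    # in a random direction (assumed Gaussian noise).
    return TRUE_VALUE + random.gauss(0, 1.0)

# Averaging more repeated measurements brings the result closer
# to the true value, as described above.
for n in (1, 10, 1000):
    avg = sum(measure() for _ in range(n)) / n
    print(f"n={n:>4}: average = {avg:.3f}, "
          f"error = {abs(avg - TRUE_VALUE):.3f}")
```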
Types of Random Error

• Environmental Errors: errors that occur due to any unpredictable change in the environment.
• Observational Errors: errors that generally occur due to mishandling of the instrument or a misjudgment
made by the observer.
Statistical treatment of data
• ‘Statistical treatment’ is when you apply a statistical method to a data set to draw meaning from it.
• Statistical treatment can be either descriptive statistics, which describe the relationships between
variables in a population, or inferential statistics, which test a hypothesis by making inferences
from the collected data.
What is Statistical Treatment of Data?

• Statistical treatment of data is when you apply some form of statistical method to a data set to
transform it from a group of meaningless numbers into meaningful output.
• Statistical treatment of data involves the use of statistical methods such as:
• mean,
• mode,
• median,
• regression,
• conditional probability,
• sampling,
• standard deviation and
• distribution range.
• These statistical methods allow us to investigate the statistical relationships between the data and
identify possible errors in the study.
• In addition to being able to identify trends, statistical treatment also allows us to organise and
process our data in the first place. This is because when carrying out statistical analysis of our
data, it is generally more useful to draw several conclusions for each subgroup within our
population than to draw a single, more general conclusion for the whole population. However, to
do this, we need to be able to classify the population into different subgroups so that we can later
break down our data in the same way before analysing it.
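Most of the listed methods are available in Python's standard library; a quick sketch on a made-up sample:

```python
import statistics

data = [4, 8, 6, 5, 3, 8, 9, 5, 8]  # made-up sample values

print("mean:  ", statistics.mean(data))
print("median:", statistics.median(data))
print("mode:  ", statistics.mode(data))
print("stdev: ", statistics.stdev(data))          # sample std deviation
print("range: ", max(data) - min(data))           # distribution range
```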

• As an example of statistical treatment of data, consider a medical study that is investigating the
effect of a drug on the human population.
• As the drug can affect different people in different ways
based on parameters such as gender, age and race, the
researchers would want to group the data into different
subgroups based on these parameters to determine how each
one affects the effectiveness of the drug.
• Categorising the data in this way is an example of
performing basic statistical treatment.
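Categorising the data into subgroups and computing per-group statistics might look like the sketch below; the age bands and response values are invented for illustration:

```python
import statistics
from collections import defaultdict

# Hypothetical trial records: (subgroup, response to the drug).
records = [
    ("18-40", 0.72), ("18-40", 0.68), ("18-40", 0.75),
    ("41-65", 0.55), ("41-65", 0.60), ("41-65", 0.58),
]

# Classify the population into subgroups before analysing it.
by_group = defaultdict(list)
for group, response in records:
    by_group[group].append(response)

# Draw a conclusion for each subgroup rather than one for the whole.
for group, responses in sorted(by_group.items()):
    print(f"Age {group}: mean={statistics.mean(responses):.3f}, "
          f"stdev={statistics.stdev(responses):.3f}")
```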
Systematic errors
• Systematic error, as the name implies, is a consistent or recurring error caused by incorrect use of
equipment or by generally faulty experimental equipment. With systematic error, you can expect the result of
each experiment to differ from the true value in the same direction and by a consistent amount.
• This is also known as systematic bias, because the errors hide the correct result, leading
the researcher to wrong conclusions.
Types of Systematic Errors

• 1. Offset Error
• Before starting your experiment, your scale should be set to the zero point. Offset error occurs
when the measurement scale is not set to zero before you weigh your items.
• 2. Scale Factor Error
• This is also known as a multiplier error.
• Scale factor error results when your scale's readings differ from the actual values by a fixed
proportion of their size.
• Consider a scenario whereby your scale repeatedly adds an extra 5% to your measurements. So when you
are measuring a value of 10 kg, your scale shows a result of 10.5 kg.
• The implication is that every measurement is read incorrectly in proportion to its size: if the
measured value increases by 1%, the error in your reading also increases by 1%. What scale factor error
does is add to or deduct from the original value by a fixed percentage or proportion.
• One thing to note is that systematic error is always consistent. If it gives a value of, say, 70 g at the
first reading, and you decide to conduct the measurement again, it will still give you the same
reading as before.
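Because a systematic error is consistent, it can be inverted once calibration has determined it. A sketch with an assumed offset and scale factor (both values are invented for illustration):

```python
# A faulty scale with a constant offset and a 5% scale-factor error.
OFFSET = 0.3         # kg added to every reading (zero point not set)
SCALE_FACTOR = 1.05  # reads 5% high

def faulty_reading(true_mass):
    """What the miscalibrated scale displays for a given true mass."""
    return true_mass * SCALE_FACTOR + OFFSET

def calibrated(reading):
    # Calibration against known reference masses lets us invert the
    # systematic error exactly, because it is always consistent.
    return (reading - OFFSET) / SCALE_FACTOR

for true_mass in (1.0, 10.0):
    r = faulty_reading(true_mass)
    print(f"true={true_mass} raw={r:.2f} corrected={calibrated(r):.2f}")
```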
Causes of Systematic Errors in Research

• Researcher’s Error
• When a researcher is ignorant, has a physical challenge that can affect the study, or is
just careless, this can alter the outcome of the research. Avoiding the above-listed traits as a
researcher can immensely reduce the likelihood of making errors in your research.
• Instrument Error
• Systematic error can happen if your equipment is faulty. The imperfection of your experiment
equipment can alter your study and ultimately, its findings.
• Analysis method Error
• As a researcher, if you do not plan in advance how you will control your experiment, your research
is at risk of being inaccurate. So to reduce the risk of error in your research, try as much as
possible to limit your independent variables to only one. The fewer variables in an analysis,
the better your chance of error-free research.
Effects of Systematic Error in Research

• The effect of a systematic error in research is that it will move the value of your measurements
away from their original value by the same percentage or the same proportion while in the same
direction.
• The consequence is that shifting the measurement does not affect reliability: irrespective of how many
times you repeat the measurement, you will get the same value. The effect is instead on the accuracy of
your result. If you are not careful enough to notice the inaccuracy, you might draw the wrong conclusion
or even apply the wrong solutions.
• Example one:
• Let’s assume some researchers are carrying out a study on weight loss. At the end of the research,
the researchers realised the scale had added 15 pounds to each of the sample measurements; they then
concluded that their finding was inaccurate because the scale gave a wrong reading. This is an
example of a systematic error, because the error, although consistent, is inaccurate. If the
researchers had not noticed the disparity, they would have drawn a wrong conclusion.
• This example shows how systematic error can occur in research because of faulty instruments.
Therefore, frequent calibration is advised before conducting a test.
• Example two:
• When measuring the temperature of a room, if your thermometer and the room you’re measuring
are in poor contact, you will get an inaccurate reading. If you repeat the test and your thermometer
still has low thermal contact with the room, you will get consistent results, even though they are
inaccurate. Here, the thermometer is not faulty; the cause of the error is the researcher’s improper
handling.
How can we eliminate systematic error?

• Triangulation: This is the method of using more than one technique to document your research
observations, so that you do not rely on a single piece of equipment or technique. When you are done
with your testing, you can easily compare the findings from your multiple techniques and see
whether or not they match.
• Frequent calibration: This means that you compare the findings from your test to the
standard value or theoretical result. Doing this regularly with a standard result to cross-check
can reduce the chance of systematic error in your research.
• When you’re conducting research, make sure you do routine checks. If you’re wondering how
often you should perform calibration, note that this generally depends on your equipment.
• Randomization: Using randomization in your study can reduce the risk of systematic error
because when you’re testing your data, you can randomly assign your data sample to the
relevant treatment groups. That helps even out the samples across the groups.
