
UNIT 1

What is Research?
Definition: Research is defined as the careful, systematic study of a particular concern or problem using scientific methods. According to the American
sociologist Earl Robert Babbie, “research is a systematic inquiry to describe,
explain, predict, and control the observed phenomenon. It involves inductive and
deductive methods.”

Types of Research
1. Quantitative Research
As the name suggests, quantitative research deals with numbers: data are collected in numerical form and a summary is drawn from these numbers. Graphs and charts help to quantify and present the results in quantitative research.

2. Qualitative Research
Qualitative research deals with the non-numerical elements of a study. When the information or data cannot be expressed in terms of numbers, qualitative research comes to the rescue. Though not as reliable as quantitative research, it helps to form better summaries and theories from the data.

Based on the nature of the research,

3. Descriptive Research
Facts are considered in descriptive methods and surveys and case studies are done
to clarify the facts. These help to determine and explain with examples, the facts,
and they are not rejected. Many variables can be used in descriptive research to
explain the facts.

4. Analytical Research
Analytical research uses the facts that have been confirmed already to form the
basis for the research and critical evaluation of the material is carried out in this
method. Analytical methods make use of quantitative methods as well.

5. Applied Research
Applied research is action research where only one domain is considered and
mostly the facts are generalized. Variables are considered constant and forecasting
is done so that the methods can be found out easily in applied research. The
technical language is used in the research and the summary is based on technical
facts.

6. Fundamental Research
Fundamental research is the basic or pure research done to find out an element or a
theory that has never been in the world yet. Several domains are connected and the
aim is to find out how traditional things can be changed or something new can be
developed. The summary is purely in common language and logical findings are
applied in the research.
Based on research design,

7. Exploratory Research
Exploratory studies are based on the theories and their explanation and it does not
provide any conclusion for the research topic. The structure is not proper and the
methods offer a flexible and investigative approach for the study. The hypothesis is
not tested and the result will not be of much help to the outside world. The findings
will be topic related that helps in improving the research more.

8. Conclusive Research
Conclusive Research aims at providing an answer to the research topic and has a
proper design in the methodology. A well-designed structure helps in formulating
and solving the hypotheses and give the results. The results will be generic and
help the outside world. Researchers will have an inner pleasure to solve the
problems and to help society in general.

9. Surveys
Surveys, though sometimes underrated, play a major role in research methodology. They collect a large amount of real-time data and support the research process at low cost and faster than most other methods. Surveys can be used in both quantitative and qualitative studies; quantitative surveys are often preferred because they provide numerical outputs based on real data. Surveys are widely used in business to gauge the demand for a product in the market and to forecast production based on the results of the survey.

10. Case Studies


Case studies are another research method in which different cases are examined and the most suitable one for the research is selected. Case studies help to form an initial idea of the research and provide its foundation. Various facts and theories drawn from the cases help to form proper reviews of the research topic, and researchers can make the topic general or specific according to the literature reviewed. A proper understanding of the research problem can be gained from a case study.

Steps in Research process


1. Selecting the research area. Your dissertation marker expects you to state that you have selected the research area due to professional and personal interests in the area, and this statement must be true. Students often underestimate the importance of this first stage in the research process. If you find a research area and research problem that genuinely interest you, the whole process of writing your dissertation will be much easier. Therefore, it is never too early to start thinking about the research area for your dissertation.

2. Formulating research aim, objectives and research questions, or developing hypotheses. The choice between formulating research questions and developing hypotheses depends on your research approach, as discussed further below in more detail. Appropriate research aims and objectives or hypotheses usually result from several attempts and revisions.

Accordingly, you need to mention in your dissertation that you revised your research aims and objectives or hypotheses several times during the research process to arrive at their final versions. It is critically important to get confirmation from your supervisor regarding your research questions or hypotheses before moving forward with the work.
3. Conducting the literature review. The literature review is usually the longest stage in the research process. In fact, it starts even before the formulation of research aims and objectives, because you have to check whether exactly the same research problem has been addressed before, and this task is part of the literature review. Nevertheless, you will conduct the main part of the literature review after formulating the research aim and objectives. You have to use a wide range of secondary data sources such as books, newspapers, magazines, journals, online articles, etc.

4. Selecting data collection methods. Data collection method(s) need to be selected on the basis of a critical analysis of the advantages and disadvantages of several alternative methods. In studies involving primary data collection, you need to discuss the advantages and disadvantages of the selected primary data collection method(s) in detail in the methodology chapter.

5. Collecting the primary data. Primary data collection should start only after detailed preparation. Sampling is an important element of this stage. You may have to conduct a pilot data collection if you choose a questionnaire as your primary data collection method. Primary data collection is not a compulsory stage for all dissertations; you will skip this stage if you are conducting desk-based research.

6. Data analysis. Analysis of data plays an important role in the achievement of the research aim and objectives. This stage involves extensive editing and coding of data. Data analysis methods vary between secondary and primary studies, as well as between qualitative and quantitative studies. In data analysis, coding of primary data plays an instrumental role in reducing sample group responses to a more manageable form for storage and future processing. Data analysis is discussed in Chapter 6 in detail.

7. Reaching conclusions. Conclusions relate to the level of achievement of the research aims and objectives. In this final part of your dissertation you will have to justify why you think the research aims and objectives have been achieved. Conclusions also need to cover research limitations and suggestions for future research.

8. Completing the research. Following all of the stages described above and organizing the separate chapters into one file leads to the completion of the first draft. You need to prepare the first draft of your dissertation at least one month before the submission deadline, so that you have sufficient time to address the feedback provided by your supervisor.

Survey Method
The essence of the survey method can be described as "questioning individuals on a topic or topics and then describing their responses" [1]. In business studies, the survey method of primary data collection is used to test concepts, gauge the attitudes of people, establish the level of customer satisfaction, conduct segmentation research, and serve a range of other purposes. The survey method can be used in both quantitative and qualitative studies.

Survey method pursues two main purposes:

1. Describing certain aspects or characteristics of a population, and/or

2. Testing hypotheses about the nature of relationships within a population.

The survey method can be broadly divided into three categories: mail survey, telephone survey and personal interview. Each of these methods is briefly described below [2]:

Mail survey: a written survey that is self-administered.

Telephone survey: a survey conducted by telephone, in which the questions are read to the respondents.

Personal interview: a face-to-face interview of the respondent.

Advantages of Survey Method


1. Surveys can be conducted faster and cheaper compared to other
methods of primary data collection such as observation and
experiments
2. Primary data gathered through surveys are relatively easy to analyse

Disadvantages of Survey Method


1. In some cases, unwillingness or inability of respondents to provide
information
2. Human bias of respondents, i.e. respondents providing inaccurate
information
3. Differences in understanding: it is difficult to formulate questions in such a way that they mean exactly the same thing to each respondent

Mathematical tools

Reading materials
UNIT 2
Statistics is a branch of Mathematics that deals with the collection, analysis, interpretation, and presentation of numerical data. In other words, it is concerned with quantitative data. The main purpose of Statistics is to draw accurate conclusions about a greater population using a limited sample.

What is Statistical Modeling?


Statistical modeling refers to the data science process of applying statistical analysis
to datasets. A statistical model is a mathematical relationship between one or more
random variables and other non-random variables. The application of statistical
modeling to raw data helps data scientists approach data analysis in a strategic
manner, providing intuitive visualizations that aid in identifying relationships between
variables and making predictions.
Common data sets for statistical analysis include Internet of Things (IoT) sensors,
census data, public health data, social media data, imagery data, and other public
sector data that benefit from real-world predictions.

Statistical Modeling Techniques


The first step in developing a statistical model is gathering data, which may be
sourced from spreadsheets, databases, data lakes, or the cloud. The most common
statistical modeling methods for analyzing this data are categorized as either
supervised learning or unsupervised learning. Some popular statistical model
examples include logistic regression, time-series, clustering, and decision trees.
Supervised learning techniques include regression models and classification models:

 Regression model: a type of predictive statistical model that analyzes the relationship between a dependent and an independent variable. Common regression models include logistic, polynomial, and linear regression models. Use cases include forecasting, time series modeling, and discovering causal relationships between variables (a minimal code sketch follows this list).
 Classification model: a type of machine learning in which an algorithm analyzes an existing, large and complex set of known data points in order to understand and then appropriately classify new data; common models include decision trees, Naive Bayes, nearest neighbor, random forests, and neural network models, which are typically used in Artificial Intelligence.
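As a concrete illustration of a regression model, the minimal sketch below fits a simple linear regression in Python with scikit-learn on synthetic data; the library choice and the numbers are illustrative assumptions, not something prescribed by the text.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=(100, 1))              # independent variable
y = 2.5 * x[:, 0] + 1.0 + rng.normal(0, 1, 100)    # dependent variable plus noise

model = LinearRegression().fit(x, y)
print("estimated slope:", model.coef_[0])          # close to 2.5
print("estimated intercept:", model.intercept_)    # close to 1.0
print("prediction at x = 4:", model.predict([[4.0]])[0])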

Unsupervised learning techniques include clustering algorithms and association rules:
 K-means clustering: aggregates data points into a specified number of groupings based on their similarities (see the sketch after this list).
 Reinforcement learning: an area of machine learning, related to but distinct from supervised and unsupervised learning, in which models iterate over many attempts, rewarding moves that produce favorable outcomes and penalizing steps that produce undesired outcomes, thereby training the algorithm to learn the optimal process.
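The following is a minimal sketch of k-means clustering, again assuming Python and scikit-learn with synthetic two-dimensional points.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
points = np.vstack([rng.normal(0, 0.5, size=(50, 2)),    # one loose group near (0, 0)
                    rng.normal(5, 0.5, size=(50, 2))])   # another group near (5, 5)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.cluster_centers_)    # roughly (0, 0) and (5, 5)
print(kmeans.labels_[:5])         # cluster assignment of the first five points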

There are three main types of statistical models: parametric, nonparametric, and
semiparametric:

 Parametric: a family of probability distributions that has a finite number of parameters.
 Nonparametric: models in which the number and nature of the parameters are flexible and not fixed in advance.
 Semiparametric: models with both a finite-dimensional component (parametric) and an infinite-dimensional component (nonparametric).

Statistical analysis defined


Statistics (or statistical analysis) is the process of collecting and analyzing data to
identify patterns and trends. It's a method of using numbers to try to remove any bias
when reviewing information. It can also be thought of as a scientific tool that can
inform decision making.

The online technology firm TechTarget.com describes statistical analysis as an aspect of business intelligence that involves the collection and scrutiny of business data and the reporting of trends.

"Statistical analysis examines every single data sample in a population (the set of items from which samples can be drawn), rather than a cross-sectional representation of samples, as less sophisticated methods do," the firm states. Statistical analysis typically involves the following steps:

 Describe the nature of the data to be analyzed.


 Explore the relation of the data to the underlying population.
 Create a model to summarize understanding of how the data relates to the
underlying population.
 Prove (or disprove) the validity of the model.
 Employ predictive analytics to anticipate future trends.

Types of statistical analysis


Descriptive statistics
"Descriptive statistics intend to describe a big hunk of data with summary charts and tables,
but do not attempt to draw conclusions about the population from which the sample was
taken," the company writes on its website. "You are simply summarizing the data you have
with pretty charts and graphs — kind of like telling someone the key points of a book
(executive summary) as opposed to just handing them a thick book (raw data)."

"Descriptive statistics therefore enables us to present the data in a more meaningful way,
which allows simpler interpretation of the data," Laerd writes on its website.
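As a small, hedged illustration of descriptive statistics in practice (not taken from the quoted sources), the sketch below summarizes a toy data set with Python and pandas.

import pandas as pd

monthly_sales = pd.Series([120, 135, 150, 160, 142, 138, 155, 149])
print(monthly_sales.describe())   # count, mean, std, min, quartiles, max
print("mean:", monthly_sales.mean(), "median:", monthly_sales.median())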

Statistical inference

The second type of statistical analysis is inference. Inferential statistics are a way to study the data even further.

According to My Market Research, inferential statistics allow organizations to test a hypothesis and draw conclusions about the data. In these cases, a sample of the entire data is typically examined, and the results are applied to the group as a whole.

Benefits of statistical analysis


Is it really worth investing in big data and statistical analysis? The best way to
answer that question is to explore the benefits. In general, statistics will help to
identify trends that escape notice without these methods. The analysis also injects
objectivity into decision-making. With good statistics, gut decisions are not
necessary.

To be more specific, statistical analysis has proven itself in many cases. Twiddy &
Company Realtors is a firm that used statistics to cut their operating costs by 15%.
The analysis found wasteful spending and helped to eliminate it.

Similar stories show data helping with market analysis. The statistics show where the
most sales happen, where the sales have the most value and what marketing is
attached to those sales. It allows for improved efficiency in every aspect of sales and
marketing.

Likewise, statistical analysis can help with work efficiency. In many cases, providing
the right tools will get the best work out of employees. Statistical analysis will allow
employers to carefully scrutinize the effectiveness of each tool and focus on those
with the best performance.

Statistical analysis software


Among the more popular statistical analysis software packages are IBM SPSS, SAS, R (including Revolution Analytics' commercial distribution), Minitab and Stata.

What is time series analysis?


Time series analysis is a specific way of analyzing a sequence of data points collected over an
interval of time. In time series analysis, analysts record data points at consistent intervals over
a set period of time rather than just recording the data points intermittently or randomly.
However, this type of analysis is not merely the act of collecting data over time.

What sets time series data apart from other data is that the analysis can show how variables
change over time. In other words, time is a crucial variable because it shows how the data
adjusts over the course of the data points as well as the final results. It provides an additional
source of information and a set order of dependencies between the data.

Time series analysis typically requires a large number of data points to ensure consistency
and reliability. An extensive data set ensures you have a representative sample size and that
analysis can cut through noisy data. It also ensures that any trends or patterns discovered are
not outliers and can account for seasonal variance. Additionally, time series data can be used
for forecasting—predicting future data based on historical data.

Why organizations use time series data analysis
Time series analysis helps organizations understand the underlying causes of trends or
systemic patterns over time. Using data visualizations, business users can see seasonal trends
and dig deeper into why these trends occur. With modern analytics platforms, these
visualizations can go far beyond line graphs.

When organizations analyze data over consistent intervals, they can also use time series
forecasting to predict the likelihood of future events. Time series forecasting is part
of predictive analytics. It can show likely changes in the data, like seasonality or cyclic
behavior, which provides a better understanding of data variables and helps forecast better.

For example, Des Moines Public Schools analyzed five years of student achievement data to
identify at-risk students and track progress over time. Today’s technology allows us to collect
massive amounts of data every day and it’s easier than ever to gather enough consistent data
for comprehensive analysis.

Time series analysis examples
 Weather data
 Rainfall measurements
 Temperature readings
 Heart rate monitoring (EKG)
 Brain monitoring (EEG)
 Quarterly sales
 Stock prices
 Automated stock trading
 Industry forecasts
 Interest rates

Time Series Analysis Types


 Classification: Identifies and assigns categories to the data.
 Curve fitting: Plots the data along a curve to study the relationships of variables
within the data.
 Descriptive analysis: Identifies patterns in time series data, like trends, cycles, or
seasonal variation.
 Explanative analysis: Attempts to understand the data and the relationships within it,
as well as cause and effect.
 Exploratory analysis: Highlights the main characteristics of the time series data,
usually in a visual format.
 Forecasting: Predicts future data. This type is based on historical trends. It uses the
historical data as a model for future data, predicting scenarios that could happen along
future plot points.
 Intervention analysis: Studies how an event can change the data.
 Segmentation: Splits the data into segments to show the underlying properties of the
source information.

Data classification
Further, time series data can be classified into two main categories:

 Stock time series data means measuring attributes at a certain point in time, like a static
snapshot of the information as it was.
 Flow time series data means measuring the activity of the attributes over a certain period,
which is generally part of the total whole and makes up a portion of the results.

Data variations
In time series data, variations can occur sporadically throughout the data:

 Functional analysis can pick out the patterns and relationships within the data to identify
notable events.
 Trend analysis means determining consistent movement in a certain direction. There are two
types of trends: deterministic, where we can find the underlying cause, and stochastic, which
is random and unexplainable.
 Seasonal variation describes events that occur at specific and regular intervals during the
course of a year. Serial dependence occurs when data points close together in time tend to be
related.

Time Series Analysis Models and Techniques
 Box-Jenkins ARIMA models: These univariate models are used to better understand a single time-dependent variable, such as temperature over time, and to predict future data points of that variable. These models work on the assumption that the data is stationary, so analysts have to account for and remove as many differences and seasonalities in past data points as they can. The ARIMA model includes terms to account for moving averages, seasonal difference operators, and autoregressive terms within the model (a minimal code sketch follows this list).
 Box-Jenkins Multivariate Models: Multivariate models are used to analyze more
than one time-dependent variable, such as temperature and humidity, over time.
 Holt-Winters Method: The Holt-Winters method is an exponential smoothing
technique. It is designed to predict outcomes, provided that the data points include
seasonality.
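A minimal sketch of a Box-Jenkins style univariate model, assuming Python with statsmodels and a synthetic monthly series; the order (1, 1, 1) is chosen purely for illustration.

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
values = np.cumsum(rng.normal(0.5, 1.0, 120))       # a simple trending series with noise
series = pd.Series(values,
                   index=pd.date_range("2015-01-01", periods=120, freq="MS"))

model = ARIMA(series, order=(1, 1, 1))              # (autoregressive, differencing, moving average)
fitted = model.fit()
print(fitted.summary())
print(fitted.forecast(steps=6))                     # forecast the next six months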

Statistical Inference Definition


Statistical inference is the process of analysing results and drawing conclusions from data subject to random variation. It is also called inferential statistics. Hypothesis testing and confidence intervals are applications of statistical inference. Statistical inference is a method of making decisions about the parameters of a population based on random sampling. It helps to assess the relationship between dependent and independent variables. The purpose of statistical inference is to estimate the uncertainty or sample-to-sample variation. It allows us to provide a probable range of values for the true values of something in the population. The components used for making a statistical inference are:

 Sample Size
 Variability in the sample
 Size of the observed differences

Types of Statistical Inference


There are different types of statistical inferences that are extensively used for making conclusions. They
are:

 One sample hypothesis testing


 Confidence Interval
 Pearson Correlation
 Bi-variate regression
 Multi-variate regression
 Chi-square statistics and contingency table
 ANOVA or T-test
Applications of Statistical Inference
 Business Analysis
 Artificial Intelligence
 Financial Analysis
 Fraud Detection
 Machine Learning
 Share Market
 Pharmaceutical Sector

Statistical Inference Examples


An example of statistical inference is given below.
Question: A card is drawn from a shuffled pack of cards. The trial is repeated 400 times, and the suits drawn are recorded below:

Suit:               Spade  Clubs  Hearts  Diamonds
No. of times drawn: 90     100    120     90

When a card is drawn at random, what is the probability of getting

1. a diamond card?
2. a black card?
3. a card other than a spade?

Solution:
By statistical inference solution,
Total number of events = 400
i.e.,90+100+120+90=400
(1) The probability of getting diamond cards:
Number of trials in which diamond card is drawn = 90
Therefore, P(diamond card) = 90/400 = 0.225
(2) The probability of getting black cards:
Number of trials in which black card showed up = 90+100 =190
Therefore, P(black card) = 190/400 = 0.475
(3) The probability of getting a card other than a spade:
Number of trials in which a card other than a spade showed up = 100+120+90 = 310
Therefore, P(not spade) = 310/400 = 0.775
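The same calculation can be reproduced in a few lines of Python, purely as a check of the arithmetic above.

counts = {"Spade": 90, "Clubs": 100, "Hearts": 120, "Diamonds": 90}
total = sum(counts.values())                             # 400 trials

p_diamond = counts["Diamonds"] / total                   # 0.225
p_black = (counts["Spade"] + counts["Clubs"]) / total    # 0.475
p_not_spade = (total - counts["Spade"]) / total          # 0.775
print(p_diamond, p_black, p_not_spade)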

Statistical Inference Procedure


The procedure involved in inferential statistics are:
 Begin with a theory
 Create a research hypothesis
 Operationalize the variables
 Recognize the population to which the study results should apply
 Formulate a null hypothesis for this population
 Accumulate a sample from the population and continue the study
 Conduct statistical tests to see if the collected sample properties are adequately different from
what would be expected under the null hypothesis to be able to reject the null hypothesis

Statistical Inference Solution


Statistical inference solutions make efficient use of statistical data relating to groups of individuals or trials. They cover all aspects of the work, including the collection, investigation and analysis of data and the organization of the collected data. Through statistical inference, people working in diverse fields can acquire knowledge from their data. Some facts about statistical inference solutions are:

 It is common to assume that the observed sample consists of independent observations from a population of a given type, such as Poisson or normal
 Statistical inference is used to estimate the parameter(s) of the assumed model, such as a normal mean or a binomial proportion

Multivariate Analysis Methods


Multivariate analysis methods are used in a variety of areas:

 Linguistics, Natural Sciences and Humanities


 Economics, insurance and financial services
 Data mining, big data and relational databases

Types of multivariate analysis methods


 Factor analysis: Reduces the data structure to the relevant underlying variables. Factor studies focus on different kinds of variables and are further subdivided into principal component analysis and correspondence analysis. For example: Which website elements have the greatest influence on purchasing behavior?
 Cluster analysis: Observations are grouped and classified on the basis of individual variables. The results are clusters and segments, such as the number of buyers of a particular product who are between 35 and 47 years old and have a high income.

Structural review procedures include, among others, the:

 Regression Analysis: Investigates the influence of two types of variables on each other: dependent and independent variables. The former are the explained variables, while the latter are the explanatory variables. The first describes the actual state on the basis of the data; the second explains this data by means of dependency relationships between the two variables. In practice, changes to web page elements correspond to independent variables, while the effect on the conversion rate would be the dependent variable.
 Variance analysis: Determines the influence of several or individual variables on groups by calculating statistical averages. You can compare variables within a group as well as across different groups, depending on where deviations are assumed. For example: Which groups most often click on the 'Buy Now' button in the shopping cart?
 Discriminant analysis: Used in the context of variance analysis to differentiate between
groups that can be described by similar or identical characteristics. For example, by which
variables do different groups of buyers differ?

Examples
A multivariate test of a web page can be presented in the following simplified way. Elements
such as headlines, teasers, images, but also buttons, icons or background colors have different
effects on user behavior. Different variants of elements are tested. The test would initially
identify these elements and show different users differently designed elements. The aim
would be to obtain data on the effects of the changes in terms of conversion rate or other
factors such as retention time, bounce rate or scrolling behavior compared to other sets of
elements.

Significance for usability


As a quantitative method, multivariate analysis is one of the most effective methods of testing
usability. At the same time, it is very complex and sometimes cost-intensive. Software can be
used to help, but the tests as such are considerably more complex than A/B tests in terms of
study design. The decisive advantage lies in the number of variables that can be considered
and their weighting as a measure of the significance of certain variables.

Even four different versions of an article's headline can result in completely different click rates. The same applies to the design of buttons or the background color of the order form. In individual cases, a multivariate perspective is therefore also worthwhile financially, especially for commercially oriented websites such as online shops or sites that are amortized through advertising.

Distinguish Between Correlation and Regression.


Definition of Correlation -
The term correlation combines 'co' (together) and 'relation', referring to the relationship between two quantities. Correlation exists when, in the study of two variables, a unit change in one variable is matched by an equivalent change in the other, whether direct or indirect. Conversely, the variables are said to be uncorrelated when movement in one variable does not produce movement in a specific direction in the other. Correlation is a statistical technique that represents the strength of the linkage between a pair of variables.
Correlation can be either negative or positive. If the two variables move in
the same direction, i.e. an increase in one variable results in the
corresponding increase in another variable, and vice versa, then the
variables are considered to be positively correlated. For example,
Investment and profit.
On the contrary, if the two variables move in different directions so that
an increase in one variable leads to a decline in another variable and vice
versa, this situation is known as a negative correlation. For example,
Product price and demand.

Definition of Regression-
Regression is a statistical technique, based on the average mathematical relationship between two or more variables, used to estimate the change in a metric dependent variable due to a change in one or more independent variables. It plays an important role in many human activities, since it is a powerful and flexible tool for forecasting past, present or future events on the basis of past or present events. For example, the future profit of a business can be estimated on the basis of past records.
In simple linear regression there are two variables, x and y, wherein y depends on x, or is influenced by x. Here y is called the dependent (or criterion) variable and x the independent (or predictor) variable. The line of regression of y on x is expressed as:
Y = a + bx
where a = constant (intercept),
b = regression coefficient (slope).
The a and b are the two regression parameters of this equation.
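A minimal sketch of both measures, assuming Python with SciPy and small illustrative data, computes the correlation coefficient and the regression parameters a and b.

from scipy import stats

x = [10, 12, 15, 18, 20, 24, 27, 30]     # e.g. investment
y = [22, 25, 30, 34, 39, 46, 51, 57]     # e.g. profit

r, p_value = stats.pearsonr(x, y)
print("correlation coefficient r:", r)

result = stats.linregress(x, y)
print("a (intercept):", result.intercept)
print("b (regression coefficient):", result.slope)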
Difference between Correlation and Regression

Meaning
 Correlation: a statistical measure that determines the association or co-relationship between two variables.
 Regression: describes how to numerically relate an independent variable to the dependent variable.

Usage
 Correlation: to represent a linear relationship between two variables.
 Regression: to fit the best line and to estimate one variable based on another.

Dependent and independent variables
 Correlation: no difference between the variables.
 Regression: the two variables are different.

Indicates
 Correlation: the correlation coefficient indicates the extent to which two variables move together.
 Regression: indicates the impact of a unit change in the known variable (x) on the estimated variable (y).

Objective
 Correlation: to find a numerical value expressing the relationship between variables.
 Regression: to estimate values of a random variable on the basis of the values of fixed variables.

Correlation and Regression Difference - They are not the Same Thing
Here’s the difference between correlation and regression analysis. To sum up, the terms differ in four key aspects.
Correlation indicates that there is a relationship between the variables; regression, in contrast, places emphasis on how one variable affects the other.
Correlation does not capture causality, while regression is founded upon it.
The correlation between x and y is identical to that between y and x. In contrast, a regression of y on x and a regression of x on y give completely different results.
Finally, a correlation is represented graphically by a single point, whereas a linear regression is visualized by a line.

Bottom Line on the Difference Between Correlation and Regression Analysis

Correlation and regression are two analyses based on the distribution of multiple variables. They can be used to describe the nature and strength of the relationship between two continuous quantitative variables.
It is evident from the above discussion that there is a big difference between correlation and regression, even though the two mathematical concepts are studied together. Correlation is used when the researcher wishes to know whether or not the variables under study are correlated and, if so, how strong their association is. Pearson's correlation coefficient is the most widely used correlation measure. In regression analysis, a functional relationship between two variables is established in order to make future projections of events.
When we talk about statistical measures and their use in research, two important concepts come into play: correlation and regression. Both concern measures over multiple variables and hence relate to the multivariate distribution. Correlation can be described as the analysis that tells us about the association, or the absence of a relationship, between two variables such as 'a' and 'b'.

Regression analysis, on the other hand, helps us to predict the value of the dependent variable based on the known value of the independent variable, assuming an average mathematical relationship between the two (or more) variables involved. Correlation and regression form an important chapter in Class 12, so it is important that students note the difference between correlation and regression and learn about both.

Advantage of Correlation Analysis:


Correlation analysis helps students get a clearer and more concise summary of the relationship between the two variables.

Advantage of Regression Analysis:


One of the greatest advantages of regression analysis is that it allows you to take a detailed look at the data and provides an equation that can be used to predict and optimize the data set in the future.

What is Spectral Analysis in Research Methodology?


Spectral analysis is an important research tool for deciphering information in various
fields of science and technology. Spectral analysis is based on the Fourier theorem
which states that any waveform can be decomposed into a sum of sine waves at
different frequencies with different amplitudes and different phase relationships.
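A minimal sketch of this idea, assuming Python with NumPy and a synthetic signal, recovers the frequency components of a waveform with a fast Fourier transform.

import numpy as np

fs = 1000                          # sampling rate in Hz
t = np.arange(0, 1, 1 / fs)        # one second of samples
# a waveform built from 50 Hz and 120 Hz sine components
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
amplitudes = 2 * np.abs(spectrum) / len(signal)

# the two strongest peaks sit at (roughly) 50 Hz and 120 Hz
print(sorted(freqs[np.argsort(amplitudes)[-2:]]))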

High-Precision Spectral Analysis Techniques


In this line of research, spectral analysis techniques have been combined with several other methods, namely proportional interpolation, time-space domain transformation, and time-domain refinement of the sampling method.

Arc Atomic Emission Spectral Analysis Method


Following this technique, analysis was carried out on the hair samples of a group of
patients in order to diagnose and also to restore the element balance in the body.
The research revealed that by comparing the elemental content in the human hair
with reference values, it is possible to assess the degree of element imbalance in the
body.

Spectral analysis also offers a rapid, accurate, versatile, and reliable method of
measuring the quality of both fresh and frozen fish by identifying and quantifying
specific contaminants and determining physical/chemical processes that indicate
spoilage. Spectrophotometric instrumentation has been recently used to monitor a
number of key parameters for quality checks, such as oxidative rancidity,
dimethylamine, ammonia, hypoxanthine, thiobarbituric acid, and formaldehyde
levels.

Entropy Spectral Analysis Methods


Entropy spectral analysis methods are applied for the forecasting of streamflow that
is vital for reservoir operation, flood control, power generation, river ecological
restoration, irrigation, and navigation. This method is used to study the monthly
streamflow for five hydrological stations in northwest China and is based on using
maximum Burg entropy, maximum configurational entropy, and minimum relative
entropy.

Overview of Error Analysis


What is an error?
An error is a form in learner language that is inaccurate, meaning it is different from the forms used by
competent speakers of the target language. For example, a learner of Spanish might say "Juana es *bueno,"
which is not what competent speakers of Spanish would say. The accurate form should be "buena."

What is error analysis?


Error analysis is a method used to document the errors that
appear in learner language, determine whether those errors are
systematic, and (if possible) explain what caused them. Native
speakers of the target language (TL) who listen to learner
language probably find learners' errors very noticeable,
although, as we shall see, accuracy is just one feature of learner
language.

While native speakers make unsystematic 'performance' errors (like slips of the tongue) from time to time, second language learners make more errors, and often ones that no native speaker ever makes. An error analysis should
focus on errors that are systematic violations of patterns in the input to which the learners have been
exposed. Such errors tell us something about the learner's interlanguage, or underlying knowledge of the
rules of the language being learned (Corder, 1981, p. 10).

How to do an error analysis


Although some learner errors are salient to native speakers, others, even though they’re systematic, may go
unnoticed. For this reason, it is valuable for anyone interested in learner language to do a more thorough
error analysis, to try to identify all the systematic errors. This can help researchers understand the
cognitive processes the learner is using, and help teachers decide which might be targeted for correction.
Researchers have worked out the following procedure for doing an error analysis (Corder, 1975).

1. Identify all the errors in a sample of learner language


For each error, consider what you think the speaker intended to say and how they should have said it. For
example, an English learner may say, "*He make a goal." This is an error. However, what should the
learner have said? There are at least two possible ways to reconstruct this error: (1) He MAKES a goal,
and (2) He IS MAKING a goal. In this first step of an error analysis, remember that there may be more
than one possible way to reconstruct a learner error. Tarone & Swierzbin (2009, p.25) offer another
example from an English language learner:

Learner: …*our school force us to learn English because um it’s, it’s a trend.

Here are three different possible reconstructions:

a. Our school forced us to learn English because it was a trend.


b. Our school required us to learn English because it was a popular language.
c. Because everyone felt it was important, English was a requirement at our school.

The way you reconstruct a learner error depends on what you think the intended message is. An added
complication is that any given learner utterance may contain errors at many levels at once: phonological,
morphological, syntactic, lexical.

Finally, determine how systematic the error is. Does it occur several times, or is it just a performance slip
(a mistake)? Even native speakers of a language make one-off mistakes when they're tired or distracted.

2. Explain the errors


Once you've identified systematic errors in your sample of learner language, think of what might have
caused those errors. There are several possibilities. Some errors could be due to native language transfer
(using a rule or pattern from the native language). Some could be developmental—errors most learners
make in learning this language no matter what their native language. Induced errors may be due to the way
a teacher or textbook presented or explained a given form. Communication strategies may be used by the
learner to get meaning across even if he or she knows the form used is not correct (Selinker 1972 discusses
these and other possible causes of systematic learner errors). Explaining errors in learner language isn't
always straightforward; for example, sometimes an error may appear to have more than one cause.
As Lightbown & Spada (2013, p. 45) say, "... while error analysis has the advantage of describing what
learners actually do … it does not always give us clear insights into why they do it."

What error analysis misses


Error analysis is a good first step, but it also can miss important features of learner language. First, in
focusing only on errors, you may miss cases where the learner uses the form correctly. For example, you
may notice that a learner makes errors in pronouncing a TL sound before consonants, but not notice that
she is producing the sound correctly before vowels. The second thing an error analysis misses
is avoidance. Schachter (1976) pointed out that learners can avoid using features of a TL that they know
they have difficulty with. For example, you may see very few errors in relative clauses in a sample of
English learner language, but then realize that's because the learner simply isn't producing many relative
clauses—correct OR incorrect. Avoidance can lead to the absence of errors—but absence of errors in this
case does NOT mean the learner has no problems with relative clauses. Finally, error analysis focuses only
on accuracy. Accuracy is just one of three ways of describing learner language: accuracy, complexity
and fluency. If teachers judge learner language only in terms of accuracy, the learners' development of
complexity and fluency can suffer.
UNIT – IV
Computers in Research
Computers are indispensable throughout the research process. The role of the computer becomes even more important when the research involves a large sample. Data can be stored in computers for immediate use or stored on auxiliary memories such as floppy discs, compact discs, universal serial bus (USB) pen drives or memory cards, so that they can be retrieved later. Computers assist the researcher throughout the different phases of the research process.
Role of Computers in the phases of research process
There are five major phases of the research process in which the computer plays different vital roles. They are:
1) Role of Computer in Conceptual phase
2) Role of Computer in Design and planning phase
3) Role of Computer in Empirical phase
4) Role of Computer in Analytic phase and
5) Role of Computer in Dissemination phase
1) Role of Computer in Conceptual Phase
The conceptual phase consists of formulation of the research problem, review of literature, theoretical framework and formulation of the hypothesis.
Role of Computers in Literature Review:
Computers help in searching the literature (for the review of literature) and bibliographic references stored in the electronic databases of the World Wide Web. They can thus be used for storing relevant published articles to be retrieved whenever needed. This has an advantage over searching the literature in the form of books, journals and other newsletters at libraries, which consumes a considerable amount of time and effort.
2) Role of Computers in Design and Planning Phase
The design and planning phase consists of the research design, population, research variables, sampling plan, review of the research plan and pilot study.
Role of Computers for Sample Size Calculation:
Several software packages are available to calculate the sample size required for a proposed study; NCSS-PASS-GESS is one such package. The standard deviation of the data from the pilot study is required for the sample size calculation (a minimal illustration follows below).
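As an illustration (not the NCSS-PASS procedure itself), the sketch below applies one common sample-size formula for estimating a mean, n = (z * sigma / E)^2, in Python with SciPy; the pilot-study standard deviation and margin of error are assumed values.

import math
from scipy.stats import norm

sigma = 12.0            # standard deviation from the pilot study (assumed value)
margin_of_error = 2.0   # acceptable error E
confidence = 0.95

z = norm.ppf(1 - (1 - confidence) / 2)              # about 1.96 for 95% confidence
n = math.ceil((z * sigma / margin_of_error) ** 2)
print("required sample size:", n)                   # about 139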
3) Role of Computers in Empirical Phase
The empirical phase consists of collecting and preparing the data for analysis.
Data Storage:
The data obtained from the subjects are stored in computers as word-processor files or Excel spreadsheets. This has the advantage of allowing necessary corrections or edits to the whole layout of the tables if needed, which is impossible or time-consuming when writing on paper. Thus, computers help in data entry, data editing and data management, including follow-up actions. Computers also allow greater flexibility in recording the data while they are collected, as well as greater ease during the analysis of these data. In research studies, the preparation and inputting of data is the most labour-intensive and time-consuming aspect of the work. Typically the data will be initially recorded on a questionnaire or record form suitable for acceptance by the computer. To do this, the researcher, in conjunction with the statistician and the programmer, will convert the data into a Microsoft Word file or Excel spreadsheet. These spreadsheets can be opened directly with statistical software for analysis.

4) Role of Computers in Data Analysis
This phase consists of the statistical analysis of the data and the interpretation of results.
Data Analysis:
Much software is now available to perform the 'mathematical part' of the research process, i.e. the calculations using various statistical methods. Packages like SPSS, NCSS-PASS, STATA and Systat are among the most widely used. Typical tasks include calculating the sample size for a proposed study, hypothesis testing and calculating the power of the study. Familiarity with any one package will suffice to carry out the most intricate statistical analyses. Computers are useful not only for statistical analyses, but also for monitoring the accuracy and completeness of the data as they are collected.
5) Role of Computers in Research Dissemination
This phase is the publication of the research study.
Research publishing:
The research article is typed in a word-processing format, converted to portable document format (PDF), and stored and/or published on the World Wide Web.
Use of Computer in Data Processing and Tabulation
Research involves large amounts of data, which can be handled manually or by computers. Computers provide the best alternative for more than one reason. Besides their capacity to process large amounts of data, they also analyse data with the help of a number of statistical procedures. Computers carry out the processing and analysis of data flawlessly and at very high speed. Statistical analysis that earlier took months now takes a few seconds or minutes. Availability of statistical software and access to computers have increased substantially all over the world in the last few years. While there are many specialised software packages for different types of data analysis, the Statistical Package for the Social Sciences (SPSS) is one package that is often used by researchers for data processing and analysis. It is a preferred choice for social work research analysis due to its easy-to-use interface and comprehensive range of data manipulation and analytical tools.
Basic Steps in Data Processing and Analysis
There are four basic steps involved in data processing and analysis using SPSS. They are:
1) Entering the data into SPSS,
2) Selecting a procedure from the menus,
3) Selecting variables for the analysis, and
4) Examining the output.
You can enter your data directly into the SPSS Data Editor. Before data analysis, it is advisable to have a detailed plan of analysis so that you are clear about which analyses are to be performed. Select the procedure to work on the data; all the variables are listed each time a dialog box is opened. Select the variables to which you wish to apply a statistical procedure. After completing the selection, execute the SPSS command; most commands are executed directly by clicking 'OK' in the dialog box. The processor in the computer will execute the procedures and display the results on the monitor as an 'output file'.
Use of statistical software such as SPSS, GRETL, etc. in research
Introduction:

Statistical software, or statistical analysis software, refers to tools that assist in the
statistics-based collection and analysis of data to provide science-based insights into
patterns and trends. They often use statistical analysis theorems and methodologies,
such as regression analysis and time series analysis, to perform data science.

Business intelligence is the practice of gathering and analysing data and then transforming it into actionable insights. Statistical software helps business intelligence in many different ways and adds more value to a business's proprietary data. Statistics can be challenging, but with the right BI tools it becomes easier, so it is always necessary to select an appropriate tool for the analysis. If you find statistical analysis difficult, you can get help from online data collection and statistics tutoring services.

SPSS - STATISTICAL PACKAGE FOR THE SOCIAL SCIENCES

SPSS means “Statistical Package for the Social Sciences” and was first launched in
1968. Since SPSS was acquired by IBM in 2009, it’s officially known as IBM SPSS
Statistics but most users still just refer to it as “SPSS”.

SPSS is application software used as a statistical analytic tool in the social sciences, for tasks such as competitor analysis, market research, surveys and much more. SPSS is a flexible and comprehensive statistical data management tool and one of the best-known statistics packages; it can perform complex data analyses and data manipulation with ease. It is designed for both non-interactive and interactive users. Before proceeding to the details of the uses of SPSS, let's look at its features, functionalities and benefits.

The features of SPSS

• It provides a wide range of statistical capabilities.

• SPSS involves several editing tools and data management systems.

• It provides excellent reporting, plotting, and data presentation features.

Functionalities of SPSS

• Data Examination.

• General Linear Model.

• Correlation.
• ANOVA.

• Regressions.

• Cluster analysis.

• Time series.

• Graphics and graphical interface.

• Data Transformations.

• Descriptive statistics.

• Reliability tests.

• T-tests.

• Factor analysis.

• Probit analysis.

• Survival analysis.

Benefits of SPSS

• Effective data management: It makes the analysis of data quicker and easier, as the program knows the exact locations of the variables and cases. This also reduces the manual workload of programmers and users to an extent.

• Wide range of options: It provides a wide range of charts, graphs, and methods to you.
SPSS also has better cleaning and screening options for the data that is used for further
analysis.

• Wide range of storage: The output of the SPSS tool remains separate from the other
data. Or we can say that it keeps the data output in separate folders and files.

Uses of SPSS

Data organization and collection

Most researchers use SPSS as a data collection tool. In SPSS, the data entry screen looks similar to other spreadsheet software. One can enter data and variables quantitatively and save the files as data files. Besides this, one can manage the data in SPSS with the help of properties assigned to the various variables.

For example, one can characterize a single variable as a nominal variable, and that information is stored in SPSS. When you access that data file in the future, whether in weeks, months or even years, you will find the data organized in the same manner as you managed it earlier.
Data output

In SPSS, once you collect and enter the data into the datasheet, you can easily generate output files from the data. Because of this, data output can be considered one of the best uses of SPSS. For example, one can generate frequency distributions of a large data set to check whether the data are normally distributed or not.

The resulting frequency distribution can be displayed as an output file. In SPSS, you can easily transfer data from the output files to the research articles you are writing. Therefore, there is no need to recreate a graph or table; you can use them directly from SPSS's output files.

SPSS & Research and Data Analysis Programs:

SPSS is powerful software mainly used by research scientists to help them process critical data in simple steps. Working on data is a complex and time-consuming process, but this software can easily handle and operate on information with the help of certain techniques. These techniques are used to analyze, transform, and produce characteristic patterns between different data variables. In addition, the output can be obtained in graphical form so that a user can easily understand the results.

The factors responsible for data handling and its execution are described below.

1. Data Transformation: This technique is used to convert the format of the data. After changing the data type, it integrates the same type of data in one place, making it easy to manage. You can insert different kinds of data into SPSS and it will change their structure as per the system specification and requirement. This means that even if you change the operating system, SPSS can still work on old data.

2. Regression Analysis: It is used to understand the relationship between dependent and independent variables that are stored in a data file. It also explains how a change in the value of an independent variable can affect the dependent data. The primary purpose of regression analysis is to understand the type of relationship between different variables.

3. ANOVA (Analysis of variance): It is a statistical approach to compare events, groups or processes and find out the differences between them. It can help you understand which method is more suitable for executing a task. By looking at the result, you can judge the feasibility and effectiveness of a particular method.

4. MANOVA (Multivariate analysis of variance): This method is used to compare data on random variables whose values are unknown. The MANOVA technique can also be used to analyze different types of population and the factors that can affect their choices.

5. T-tests: It is used to understand the difference between two sample types, and researchers apply this method to find out the difference in the interests of two kinds of groups. This test can also indicate whether an observed difference is meaningful or merely due to chance (a minimal Python illustration follows below).
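SPSS runs these tests through its menus; as an illustration of the underlying t-test itself (not of SPSS's interface), the sketch below performs an independent two-sample t-test in Python with SciPy on made-up data.

from scipy import stats

group_a = [23, 25, 28, 30, 26, 27, 29]
group_b = [20, 22, 21, 24, 23, 19, 22]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print("t statistic:", t_stat)
print("p value:", p_value)    # a small p value suggests a real difference between the groups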

STATA and its Importance in Data Analysis


Stata is one of the most common and widely used statistical software packages among researchers around the world. It is powerful statistical software that permits users to manage, analyze, and generate graphical visualizations of data. Researchers in fields such as economics, biomedicine, and political science use Stata to inspect data patterns. It supports both a graphical user interface and a command line, making the use of the software more intuitive.

Over the past few years, the use of Stata in scholarly articles indexed in Google Scholar has risen by over 55 per cent. Regardless of industry and field, the tool's value has increased, making it one of the more important assets for future employment and one of the qualifications employers look for among candidates. Our Statistics Tutoring Services can help you with all aspects of understanding and working in this particular software.

Because of its rising popularity, the Department of Economics at American University has chosen Stata as the primary statistical software used in core and elective economics courses. Among its numerous capabilities, it includes built-in commands to clean and manage data, carries out fundamental statistical analysis, executes advanced econometric procedures, including time-series regression and panel data models, and creates visually appealing graphs and tables.

Stata enables users to write their own code or use menus to carry out their analysis. It
supports importing data in various formats, including CSV and spreadsheet (including
Excel) formats. Its file formats are platform-independent, allowing users of various
operating systems to share their datasets and programs easily. There are four different
versions of Stata on the market:

 Stata/MP – for multiprocessor computers
 Stata/IC – the standard and most commonly used version
 Stata/SE – which supports large databases
 Small Stata – a smaller version used for educational purposes, which helps students
use the software throughout their programme.

Applications of Stata

 Stata has a user-friendly graphical user interface. The best feature of its user
interface is that it can be easily adopted by various users regardless of their
experience.

 It also has data management features. With the help of Stata, you can easily merge
data sets and reshape them.
 Stata's graphical user interface (GUI) includes menus and dialogue boxes. Through
these dialogue boxes, users can access various useful features, such as data
management, data analysis, and statistical analysis. The Data, Graphics, and
Statistics menus are all easily accessible.
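Inside Stata, this kind of data management is done with its own commands (such as merge and reshape). Purely as an analogy, a similar merge-and-reshape can be sketched in Python with pandas; the data sets and column names below are invented for illustration.

import pandas as pd

people = pd.DataFrame({"id": [1, 2], "name": ["Asha", "Ben"]})          # made-up data set
scores = pd.DataFrame({"id": [1, 1, 2, 2],
                       "year": [2022, 2023, 2022, 2023],
                       "score": [70, 75, 80, 82]})                      # made-up data set

merged = people.merge(scores, on="id")                                  # connect the two data sets
wide = merged.pivot(index="id", columns="year", values="score")         # reshape from long to wide
print(wide)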

GRETL
 gretl is an acronym for Gnu Regression, Econometrics and Time-series Library
 it is free econometrics software
 it has an easy Graphical User Interface (GUI)
 it runs least-squares, maximum-likelihood, and systems estimators...
 it outputs results to several formats
 very important for us in this course: it admits scripts (a sequence of commands saved
in a file)
How do I get gretl?
 it can be downloaded from http://gretl.sourceforge.net and installed on your personal
computer
 it runs on Windows, Mac, and Linux

gretl is an open-source statistical package, mainly for econometrics. The name is an acronym
for Gnu Regression, Econometrics and Time-series Library.
It has both a graphical user interface (GUI) and a command-line interface. It is written in C,
uses GTK+ as widget toolkit for creating its GUI, and calls gnuplot for generating graphs. The
native scripting language of gretl is known as hansl (see below); it can also be used together
with TRAMO/SEATS, R, Stata, Python, Octave, Ox and Julia.
It includes natively all the basic statistical techniques employed in contemporary Econometrics
and Time-Series Analysis. Additional estimators and tests are available via
user-contributed function packages, which are written in hansl.[2] gretl can output models
as LaTeX files.
Besides English, gretl is also available in Albanian, Basque, Bulgarian, Catalan, Chinese,
Czech, French, Galician, German, Greek, Italian, Polish, Portuguese (both varieties),
Romanian, Russian, Spanish, Turkish and Ukrainian.
Gretl has been reviewed several times in the Journal of Applied Econometrics[3][4][5] and, more
recently, in the Australian Economic Review.[6]
A review also appeared in the Journal of Statistical Software[7] in 2008. Since then, the journal
has featured several articles in which gretl is used to implement various statistical techniques.

Supported data formats


gretl offers its own fully documented, XML-based data format.
It can also import ASCII, CSV, databank, EViews, Excel, Gnumeric, GNU
Octave, JMulTi, OpenDocument spreadsheets, PcGive, RATS 4, SAS xport, SPSS,
and Stata files. Since version 2020c, the GeoJSON and Shapefile formats are also supported, for
thematic map creation.
It can export to Stata, GNU Octave, R, CSV, JMulTi, and PcGive file formats.

Genetic Algorithms
Genetic Algorithm (GA) is a search-based optimization technique based on the principles
of genetics and natural selection. It is frequently used to find optimal or near-optimal
solutions to difficult problems which would otherwise take a lifetime to solve, and it is
widely applied to optimization problems, in research, and in machine learning.

Basic Terminology
Before beginning a discussion on Genetic Algorithms, it is essential to be familiar
with some basic terminology which will be used throughout this section.
 Population − It is a subset of all the possible (encoded) solutions to the given
problem. The population for a GA is analogous to a population of human
beings, except that instead of human beings we have candidate solutions.
 Chromosomes − A chromosome is one such solution to the given problem.
 Gene − A gene is one element position of a chromosome.
 Allele − It is the value a gene takes for a particular chromosome.

 Genotype − Genotype is the population in the computation space. In the
computation space, the solutions are represented in a way which can be
easily understood and manipulated using a computing system.
 Phenotype − Phenotype is the population in the actual real-world solution
space, in which solutions are represented just as they appear in real-world
situations.
 Decoding and Encoding − For simple problems, the phenotype and
genotype spaces are the same. However, in most of the cases, the
phenotype and genotype spaces are different. Decoding is a process of
transforming a solution from the genotype to the phenotype space, while
encoding is a process of transforming from the phenotype to genotype
space. Decoding should be fast as it is carried out repeatedly in a GA during
the fitness value calculation.
 Fitness Function − A fitness function simply defined is a function which
takes the solution as input and produces the suitability of the solution as the
output. In some cases, the fitness function and the objective function may be
the same, while in others it might be different based on the problem.
 Genetic Operators − These alter the genetic composition of the offspring.
These include crossover, mutation, selection, etc.

Basic Structure
The basic structure of a GA is as follows −
We start with an initial population (which may be generated at random or seeded by
other heuristics) and select parents from this population for mating. Crossover and
mutation operators are applied to the parents to generate new offspring. Finally, these
offspring replace the existing individuals in the population and the process repeats. In
this way, genetic algorithms try to mimic natural evolution to some extent.
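A minimal sketch of this loop in Python is given below; the fitness function (counting the 1-bits in a binary chromosome) and all parameter values are chosen only for illustration and are not taken from any particular GA library.

import random

CHROMOSOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.01

def fitness(chromosome):                       # fitness function: number of 1-bits
    return sum(chromosome)

def select(population):                        # tournament selection of one parent
    return max(random.sample(population, 3), key=fitness)

def crossover(parent1, parent2):               # single-point crossover
    point = random.randint(1, CHROMOSOME_LEN - 1)
    return parent1[:point] + parent2[point:]

def mutate(chromosome):                        # flip each gene with a small probability
    return [1 - g if random.random() < MUTATION_RATE else g for g in chromosome]

population = [[random.randint(0, 1) for _ in range(CHROMOSOME_LEN)]
              for _ in range(POP_SIZE)]        # initial random population

for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]    # offspring replace the old population

print(max(fitness(c) for c in population))     # best fitness found

Here the whole population is replaced each generation; many practical GAs instead keep the best individuals (elitism), which is one of the design choices discussed in the literature.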
Fuzzy Logic Systems
Fuzzy Logic Systems (FLS) produce acceptable but definite output in response to
incomplete, ambiguous, distorted, or inaccurate (fuzzy) input.

What is Fuzzy Logic?


Fuzzy Logic (FL) is a method of reasoning that resembles human reasoning. The
approach of FL imitates the way of decision making in humans that involves all
intermediate possibilities between digital values YES and NO.
The conventional logic block that a computer can understand takes precise input
and produces a definite output as TRUE or FALSE, which is equivalent to a human's
YES or NO.
The inventor of fuzzy logic, Lotfi Zadeh, observed that unlike computers, the human
decision making includes a range of possibilities between YES and NO, such as −
CERTAINLY YES

POSSIBLY YES

CANNOT SAY

POSSIBLY NO

CERTAINLY NO

Fuzzy logic works on the levels of possibility of the input to achieve a definite
output.
Implementation
 It can be implemented in systems with various sizes and capabilities ranging
from small micro-controllers to large, networked, workstation-based control
systems.
 It can be implemented in hardware, software, or a combination of both.

Why Fuzzy Logic?


Fuzzy logic is useful for commercial and practical purposes.

 It can control machines and consumer products.


 It may not give accurate reasoning, but acceptable reasoning.
 Fuzzy logic helps to deal with the uncertainty in engineering.

Fuzzy Logic Systems Architecture


It has four main parts, as described below −
 Fuzzification Module − It transforms the system inputs, which are crisp
numbers, into fuzzy sets. It splits the input signal into five levels, such as −

LP − x is Large Positive

MP − x is Medium Positive

S − x is Small

MN − x is Medium Negative

LN − x is Large Negative
 Knowledge Base − It stores IF-THEN rules provided by experts.
 Inference Engine − It simulates the human reasoning process by making
fuzzy inference on the inputs and IF-THEN rules.
 Defuzzification Module − It transforms the fuzzy set obtained by the
inference engine into a crisp value.

Algorithm
 Define linguistic Variables and terms (start)
 Construct membership functions for them. (start)
 Construct knowledge base of rules (start)
 Convert crisp data into fuzzy data sets using membership functions. (fuzzification)
 Evaluate rules in the rule base. (Inference Engine)
 Combine results from each rule. (Inference Engine)
 Convert output data into non-fuzzy values. (defuzzification)
Development
Step 1 − Define linguistic variables and terms
Linguistic variables are input and output variables in the form of simple words or
sentences. For room temperature, cold, warm, hot, etc., are linguistic terms.
Temperature (t) = {very-cold, cold, warm, very-warm, hot}
Every member of this set is a linguistic term and it can cover some portion of overall
temperature values.
Step 2 − Construct membership functions for them
The membership functions of the temperature variable map every crisp temperature value
to a degree of membership in each of these linguistic terms.
Step 3 − Construct knowledge base rules
Create a matrix of room temperature values versus target temperature values that
an air conditioning system is expected to provide.

Room Temp. \ Target    Very_Cold    Cold         Warm         Hot          Very_Hot

Very_Cold              No_Change    Heat         Heat         Heat         Heat

Cold                   Cool         No_Change    Heat         Heat         Heat

Warm                   Cool         Cool         No_Change    Heat         Heat

Hot                    Cool         Cool         Cool         No_Change    Heat

Very_Hot               Cool         Cool         Cool         Cool         No_Change

Build a set of rules into the knowledge base in the form of IF-THEN-ELSE
structures.

Sr. No.   Condition                                              Action

1         IF temperature=(Cold OR Very_Cold) AND target=Warm    THEN Heat

2         IF temperature=(Hot OR Very_Hot) AND target=Warm      THEN Cool

3         IF (temperature=Warm) AND (target=Warm)               THEN No_Change

Step 4 − Obtain fuzzy value


Rules are evaluated using fuzzy set operations: the operations used for OR and AND are
Max and Min respectively. All evaluation results are then combined to form a final result,
which is a fuzzy value.
Step 5 − Perform defuzzification
Defuzzification is then performed according to the membership function of the output
variable.
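The five steps above can be sketched in Python for a toy heating controller. The triangular membership functions, the two rules, and the heater-output scale are all invented for illustration, and the defuzzification shown is a simplified weighted average over single output levels rather than a full centroid computation.

def tri(x, a, b, c):
    # Triangular membership function: 0 outside [a, c], peaking at 1 when x == b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def cold(t):
    return tri(t, 0, 10, 20)       # fuzzification: degree to which t is "Cold" (made-up range)

def warm(t):
    return tri(t, 15, 22, 30)      # degree to which t is "Warm" (made-up range)

room_temp, target_temp = 12.0, 22.0                    # crisp inputs (hypothetical readings)

# Rule evaluation (inference engine): AND -> Min, OR -> Max.
heat      = min(cold(room_temp), warm(target_temp))    # IF room is Cold AND target is Warm THEN Heat
no_change = min(warm(room_temp), warm(target_temp))    # IF room is Warm AND target is Warm THEN No_Change

# Defuzzification: simplified weighted average over made-up output levels (percent heater power).
HEAT_LEVEL, NO_CHANGE_LEVEL = 100.0, 0.0
total = heat + no_change
crisp_output = (heat * HEAT_LEVEL + no_change * NO_CHANGE_LEVEL) / total if total else 0.0
print(crisp_output)                                    # heater power to apply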

Application Areas of Fuzzy Logic


The key application areas of fuzzy logic are as given −
Automotive Systems

 Automatic Gearboxes
 Four-Wheel Steering
 Vehicle environment control
Consumer Electronic Goods

 Hi-Fi Systems
 Photocopiers
 Still and Video Cameras
 Television
Domestic Goods
 Microwave Ovens
 Refrigerators
 Toasters
 Vacuum Cleaners
 Washing Machines
Environment Control

 Air Conditioners/Dryers/Heaters
 Humidifiers

Advantages of FLSs
 Mathematical concepts within fuzzy reasoning are very simple.
 You can modify an FLS by just adding or deleting rules due to the flexibility of
fuzzy logic.
 Fuzzy logic systems can take imprecise, distorted, or noisy input information.
 FLSs are easy to construct and understand.
 Fuzzy logic is a solution to complex problems in all fields of life, including
medicine, as it resembles human reasoning and decision making.

Disadvantages of FLSs
 There is no systematic approach to designing a fuzzy system.
 They are understandable only when simple.
 They are suitable only for problems that do not need high accuracy.
