Assignment: Quantitative Techniques for Management


LEARNER NAME HRIDAYANAND GOGOI

ROLL NO 2214500255
PROGRAM BACHELOR OF BUSINESS ADMINISTRATION (BBA)
SEMESTER III
COURSE NAME QUANTITATIVE TECHNIQUES FOR MANAGEMENT

COURSE CODE DBB2102


Set – 1 Questions
1. (a) Describe limitations of Statistics. Also explain how Statistics is useful in
accountancy and auditing.

Limitations of Statistics – Even though statistics is very useful in many areas of human
knowledge, it has the limitations mentioned below –

 Statistics does not deal with individual values – Statistics focuses on patterns
and trends within groups, not on individual data points. It aims to understand the
behaviour of a population by analysing characteristics such as averages, variability
and the inter-relationships among variables. Hence a single value taken in isolation
cannot offer much statistical insight.
 Statistics does not deal with qualitative characteristics – Statistics mainly deals with
quantitative data; it cannot effectively capture or analyse qualitative characteristics
like colour, emotion or subjective experience, which limits its scope in some
circumstances.
 Statistical conclusions are not universally true – Statistical conclusions are
interpretations of data and may not hold in every case. The conclusions can change if
wrong data are fed in, so great care must be taken and expert advice sought where
needed.
 Experts are required – Interpreting data correctly requires experts, and making sound
assumptions and predictions needs extensive training and expertise.

Use of statistics in accountancy and auditing – In both accountancy and auditing, the use of
statistics improves the reliability, accuracy and efficiency of financial reporting and its
analysis. Statistics provides a quantitative base for better decision making and helps
professionals navigate the complexities of financial supervision and assurance.
In Accountancy –
1. Financial Analysis – For the analysis of account statements and financial health of the
organisation.
2. Forecasting and Budget Planning – Historical data are analysed to forecast and set the
new budget. This involves studying past patterns and project performance through
statistics.
3. Integrity of internal controls – Sampling methods are used to check the effectiveness of
the internal control.

In Auditing –
1. Assessment of Risk – Statistical tools help auditors examine past patterns and
performance, and identify anomalies, probable errors or chances of fraud.
2. Detection of fraud – Statistics helps auditors find unusual patterns or defaults in
financial data that may indicate fraudulent activity. Data analytics and statistical
models improve the ability to uncover irregularities.
3. Prediction and Analysis – Advanced statistical models help predict future trends,
identify potential challenges and improve the overall efficacy of the audit process.
1. (b) State the meaning of a questionnaire. What are the precautions
necessary in drafting a good questionnaire?

Questionnaire - A questionnaire can be explained as an organized set of written or verbal
questions designed to gather data, opinions or information from individuals or
respondents. It is a research or investigation tool used in surveys, studies or assessments to
collect standardized data, thus helping in systematic analysis of specific topics or subjects.
Precautions necessary in drafting a good questionnaire –
 The questionnaire should be short and precise.
 It is good if questions can be answered with "Yes" or "No".
 The units for which information is needed should be precise and clear.
 The flow of questions should be systematic.
 Pilot Testing – Conduct a small-scale pilot test to identify potential issues.
 Correct Length – Keep the questionnaire concise to hold respondents' interest and
prevent survey fatigue.
 Privacy Assurance – It should be openly communicated that all responses will be kept
confidential. This helps respondents give honest and straight feedback without fear.

2. (a) The average daily wage of 100 workers in a factory is Rs. 72. The average
daily wage of 70 male workers is Rs. 75. Find the average daily wage of
female workers.

Since the average daily wage of 100 workers is Rs. 72, the total daily wage for 100 workers is
100 × 72 = Rs. 7200.
The average daily wage of a male worker is Rs. 75, so the total daily wage for 70 male workers
is 75 × 70 = Rs. 5250.
So, the total wage of the female workers is 7200 – 5250 = Rs. 1950.
Since there are 100 workers in total and 70 of them are male, the number of female workers is
100 – 70 = 30.
So, the average daily wage of a female worker is 1950/30 = Rs. 65.
Answer: Rs. 65
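The arithmetic above can be sketched in Python (all figures are taken from the question):

```python
total_workers, overall_avg = 100, 72   # all workers and their average daily wage
male_workers, male_avg = 70, 75        # male workers and their average daily wage

total_wages = total_workers * overall_avg      # 7200
male_wages = male_workers * male_avg           # 5250
female_workers = total_workers - male_workers  # 30

female_avg = (total_wages - male_wages) / female_workers
print(female_avg)  # 65.0
```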



2. (b) Write Short note on:
i) Mean Deviation
ii) Coefficient of Mean Deviation
iii) Standard Deviation
iv) Coefficient of Variation
v) Quartile Deviation

i) Mean Deviation – Mean Deviation, also known as the Average Deviation, is a
measure of the dispersion or spread of a set of values around their mean (average).
It quantifies the average distance between each data point and the mean of the
dataset.
ii) Coefficient of Mean Deviation - The Coefficient of Mean Deviation is a relative
measure that expresses the mean deviation as a percentage of the mean. It provides
a standardized way to compare the variability of different data sets, especially when
they have different units or scales.

iii) Standard Deviation - Standard Deviation is a statistical measure that quantifies the
amount of variation or dispersion in a set of values.

iv) The Coefficient of Variation – The relative measure of variability that expresses
the standard deviation as a percentage of the mean. It is particularly useful for
comparing the relative dispersion of datasets with different units or scales.

v) Quartile Deviation - Also known as the Semi-Interquartile Range, this is a measure
of statistical dispersion that quantifies the spread of a dataset by considering the
range of the middle 50% of the data. It is calculated as half the difference between
the upper quartile (Q3) and the lower quartile (Q1).
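As a sketch, all five measures can be computed with Python's standard library for a small hypothetical dataset (the values below are illustrative, not from the assignment):

```python
from statistics import mean, pstdev, quantiles

data = [2, 4, 6, 8, 10]   # hypothetical values for illustration
m = mean(data)            # 6

md = mean(abs(x - m) for x in data)   # Mean Deviation about the mean
coeff_md = md / m                     # Coefficient of Mean Deviation
sd = pstdev(data)                     # (population) Standard Deviation
cv = sd / m * 100                     # Coefficient of Variation, in %
q1, _, q3 = quantiles(data, n=4)      # lower and upper quartiles
qd = (q3 - q1) / 2                    # Quartile Deviation

print(md, round(coeff_md, 4), round(sd, 3), round(cv, 1), qd)
```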

3. (a) Obtain the correlation coefficient for the data given below:
X: 1 2 3 4 5 6 7 8 9
Y: 9 8 10 12 11 13 14 16 15

With means X̄ = 5 and Ȳ = 12, we get Σ(X − X̄)(Y − Ȳ) = 57, Σ(X − X̄)² = 60 and
Σ(Y − Ȳ)² = 60, so r = 57 / √(60 × 60) = 57/60 = 0.95.
Answer: 0.95 (a strong positive correlation)
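The value can be verified by computing Pearson's r directly from the deviations; a minimal sketch:

```python
x = [1, 2, 3, 4, 5, 6, 7, 8, 9]
y = [9, 8, 10, 12, 11, 13, 14, 16, 15]

n = len(x)
mx, my = sum(x) / n, sum(y) / n                       # means: 5 and 12
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # 57
sxx = sum((a - mx) ** 2 for a in x)                   # 60
syy = sum((b - my) ** 2 for b in y)                   # 60

r = sxy / (sxx * syy) ** 0.5
print(round(r, 2))  # 0.95
```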


3 (b) What are the uses of Regression Analysis? Give five examples where the use of
regression analysis can beneficially be made.

Regression analysis is a powerful statistical tool for finding the relationship between a
dependent variable (the one to be predicted) and one or more independent variables (those that
might influence it). It fits a line or model to the data, revealing how a change in one variable
can impact another. Though it has many potential uses, below are five of them –
1. Economics – It can be used to analyse the economy by finding the relationships
between variables like inflation, interest rates and consumer spending power, thus
providing an understanding useful for economic forecasting.
2. Marketing – The analysis may be used to find the impact of advertising expenditure on
sales, helping to plan marketing activities and allocate the available funds.
3. Healthcare – Regression models can predict health outcomes on the basis of factors
like a person's demographics, lifestyle and medical history, thereby aiding more
personalized medicine and risk assessment.
4. Finance – The analysis is highly valued in finance for understanding the relationships
among variables like stock prices, interest rates and economic indicators, supporting
buy-or-sell decision making.
5. Education – Regression can examine the effect of numerous factors on academic
performance, helping educators identify the important ones and design targeted
interventions for improvement.
Set – 2 Questions
4 What do you mean by Time Series? Describe the various methods of Secular Trends.

A time series is a chronological collection of data points measured or recorded at regularly
spaced time intervals. Each data point in a time series corresponds to a specific moment and
forms part of a sequential order. Time series analysis involves observing and modeling the
patterns, trends and behaviours inherent in the data to understand temporal dynamics and to
make predictions or detect anomalies. The most common applications are forecasting future
values, monitoring trends and analyzing historical patterns in fields like finance, economics,
and environmental science. Some day-to-day examples of time series -
 Temperature: Monitoring the daily high and low temperatures over a year.
 Stock prices: Observing stock price fluctuations over minutes, days or years.
 Website traffic: Counting the visitors to a website each hour, day or week.
 Heart rate: Observing a person's heartbeat per minute over the course of a day.

Various methods of Secular Trends- There are four methods of Secular Trends as mentioned
below -

 The freehand or graphic method – This involves visually plotting the data points on a
graph without using formal mathematical models. The analyst then draws a line that
best represents the observed trend. This method is intuitive and gives a rapid indication
of drifts in the data, but at the same time it may lack the precision and quantification
obtainable from formal statistical techniques.

 Semi-average method - Here the entire series is divided into two halves and the average
of each half is calculated. If data are taken for an even number of years, the series can
be divided into two halves without difficulty. If the total number of years is odd, the
middle year is left out of the time series and the two halves consist of the periods on
either side of it. The average of each half is taken to represent the value corresponding
to the midpoint of that half's time interval. We thus get two points. These two points
are plotted on a graph and joined by a straight line, which gives us the required trend
line.

 Moving Average Method - The method helps identify trends by filtering out short-
term variations that can obscure the bigger picture. Calculations are based on a set of
adjacent data points that move along the time series like a window. Simple moving
averages give equal weight to all data points while weighted moving averages assign
greater importance to specific points depending on the chosen weights.
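A minimal sketch of a simple (unweighted) moving average, using invented yearly figures:

```python
def moving_average(series, window):
    """Unweighted mean over a window sliding along the series."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

production = [12, 14, 13, 17, 16, 18, 20]  # hypothetical yearly output
ma = moving_average(production, 3)
print(ma)  # 3-year moving averages, smoothing short-term variation
```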

 Least squares method of fitting mathematical curves – This is a widely used method
and provides a mathematical device for obtaining an objective fit to the trend of a
given time series.
The main equation is Yc = a + bX
where Yc denotes the trend value of the dependent variable, i.e., of the Y series, and
X is the independent variable, i.e., time.
The constant a denotes the value of Yc when X = 0, and b denotes the change in Yc
for a unit change in the X variable.
To determine the values of the constants a and b, the following normal equations
are solved -

ΣY = na + bΣX
ΣXY = aΣX + bΣX²

5 Define Index Numbers. Describe various test for consistency of Index Numbers.

Index numbers are specialized averages used to measure the relative change in a variable or
group of variables over time. Mostly used in economics and finance, index numbers help in
analyzing trends, comparing different time periods and making relative calculations.
They express the value of a variable in relation to a base period or base value set to 100, for
example stock market prices or inflation.

Below are the major tests for consistency of Index Numbers –

 Unit Test - This test requires the index number formula to be independent of the units
in which prices or quantities are quoted; the index should come out the same whether,
say, a commodity's price is recorded per kilogram or per quintal. All commonly used
formulas except the simple aggregative index satisfy this test.

 Time Reversal Test - This test requires that if the time subscripts of the formula are
interchanged, the resulting index is the reciprocal of the original, i.e., P01 × P10 = 1.
It ensures the formula gives a fair and consistent measure of change whichever period
is taken as the base, and is a crucial test of the validity of an index number formula.
For example, if we calculate the index with 2022 as the base and 2023 as the current
period, and again with 2023 as the base and 2022 as the current period, the product of
the two results should equal 1.

 Factor Reversal Test – This test requires that the product of the price index and the
corresponding quantity index (obtained by interchanging prices and quantities in the
same formula) should equal the ratio of the total values in the two periods, i.e.,
P01 × Q01 = Σp1q1 / Σp0q0. A formula that permits this factorization treats prices and
quantities symmetrically; Fisher's ideal index is the classic formula satisfying both this
test and the time reversal test.

 Circular Test - The circular test extends the time reversal test to three or more periods.
It requires that the indices computed along a chain of periods that returns to the starting
point should multiply to 1, i.e., P01 × P12 × P20 = 1. An index satisfying this test is
consistent across different base periods, which validates its stability and suitability for
making meaningful comparisons and trend analyses across various time frames. The
simple aggregative and fixed-weight aggregative formulas satisfy it, although Fisher's
ideal index does not.
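The time reversal and factor reversal tests can be checked numerically for Fisher's ideal index (the geometric mean of the Laspeyres and Paasche indices); a sketch with invented prices and quantities:

```python
def fisher(pa, qa, pb, qb):
    """Fisher's ideal index from period a to period b."""
    laspeyres = sum(p * q for p, q in zip(pb, qa)) / sum(p * q for p, q in zip(pa, qa))
    paasche = sum(p * q for p, q in zip(pb, qb)) / sum(p * q for p, q in zip(pa, qb))
    return (laspeyres * paasche) ** 0.5

# Hypothetical prices and quantities for two commodities in two periods
p0, q0 = [10, 8], [5, 10]
p1, q1 = [12, 10], [6, 12]

p01 = fisher(p0, q0, p1, q1)   # price index, period 0 -> 1
p10 = fisher(p1, q1, p0, q0)   # price index, period 1 -> 0
q01 = fisher(q0, p0, q1, p1)   # quantity index (prices and quantities swapped)
value_ratio = sum(p * q for p, q in zip(p1, q1)) / sum(p * q for p, q in zip(p0, q0))

print(round(p01 * p10, 6))                # 1.0 -> time reversal test holds
print(round(p01 * q01 - value_ratio, 6))  # 0.0 -> factor reversal test holds
```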
6 (a) Delineate the principles of sampling methods. Explain sampling and non-sampling
errors.
Principles of Sampling Methods-
 Representation: The sample should accurately reflect the characteristics of the entire
population it's drawn from. This ensures conclusions drawn from the sample can be
reliably generalized to the entire population.
 Random in nature: Every element in the population should have an equal chance of
being selected in the sample. This minimizes bias and ensures the sample is not skewed
towards certain groups within the population.
 Adequacy: Adequate size to provide statistically reliable results. Larger samples are
generally more accurate but also require more resources. Determining the optimal
sample size depends on factors like the desired level of precision and the expected
variability within the population.
 Efficiency: The sampling method should be chosen to be cost-effective and practical
to implement. This includes considering factors like the availability of data, access to
the population, and the skills required to conduct the sampling.
 Ethical considerations: Sampling should be conducted ethically, with respect for the
privacy and rights of individuals in the population. This may involve obtaining
informed consent, protecting confidentiality and avoiding any discriminatory
practices in selecting the sample.
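The randomness principle above (every element gets an equal chance, with no duplicates) is exactly what simple random sampling without replacement provides; a minimal sketch with a hypothetical population:

```python
import random

population = list(range(1, 101))   # hypothetical population of 100 units
random.seed(7)                     # fixed seed so the draw is reproducible

sample = random.sample(population, 10)  # each unit has an equal selection chance
print(sorted(sample))
```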

Sampling and non-sampling errors.

 Sampling Error - Sampling error refers to the discrepancy between the characteristics
of a sample and the corresponding attributes of the entire population from which the
sample was drawn. It occurs due to the inherent variability in selecting a subset rather
than the entire population. Reducing sampling error involves employing random and
representative sampling methods, increasing sample size, and using statistical
techniques. Recognizing and minimizing sampling error is crucial for ensuring the
generalizability and accuracy of research findings to the broader population.

 Non-Sampling errors- Non-sampling errors in research refer to inaccuracies that occur
outside the sampling process, affecting the quality and reliability of study results.
Examples include coverage errors, arising from incomplete or inaccurate sampling
frames; non-response errors, caused by refusal to participate; measurement errors,
resulting from inaccuracies in data collection tools; and processing errors, occurring
during data entry or analysis. Minimizing non-sampling errors requires careful study
design, thorough training of personnel, and diligent data validation processes to
enhance the accuracy and validity of research findings.
6 (b) What role does statistical quality control play in maintaining the quality of a product?
Describe the advantages of statistical quality control.

Statistical quality control (SQC) plays a major role in ensuring the quality of a product. It is
crucial for maintaining product consistency, reducing defects, improving efficiency and
ensuring customer satisfaction in manufacturing processes. SQC originated in the early 20th
century, with pioneers like Walter Shewhart laying the foundation for its principles.

The role of SQC is vital in guaranteeing the consistency and reliability of manufacturing
processes. By employing statistical techniques, SQC supports detecting and addressing
discrepancies in production, thus avoiding defects and improving overall product quality. It
plays a crucial role in achieving and maintaining desired standards, reducing waste and
improving efficiency. SQC also helps in making informed decisions by providing data-driven
insights into process performance. It involves statistical tools such as control charts,
histograms and Pareto analysis to monitor and control processes effectively. Thus SQC
contributes to customer satisfaction, brand reputation and cost savings, making it an
indispensable tool for businesses committed to delivering high-quality products and
maintaining a competitive edge in the market.
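As a sketch of the control-chart idea mentioned above, 3-sigma control limits can be computed for a series of measurements (an individuals-style chart; the measurements are invented):

```python
from statistics import mean, pstdev

# Hypothetical measurements of a dimension from a production line
measurements = [50.1, 49.8, 50.3, 50.0, 49.9, 50.2, 49.7, 50.1]

centre = mean(measurements)
sigma = pstdev(measurements)
ucl = centre + 3 * sigma   # upper control limit
lcl = centre - 3 * sigma   # lower control limit

out_of_control = [x for x in measurements if not lcl <= x <= ucl]
print(round(centre, 4), out_of_control)  # empty list: process is in control
```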

Below are the major advantages of statistical quality control-

 SQC provides continuous inspection at various stages of the manufacturing process
of the product.
 It helps in controlling the quality of the product.
 It reduces the need for 100% inspection of the manufactured product.
 It reduces wastage, since quality is controlled at every step of manufacturing.
 It is a simple technique and can be operated by less-skilled manpower as well.
 Bad quality is identified early, helping to minimize losses and build a good
brand image.
 Stored data can be used as a reference for future projects or to train the team.
 The quality of the product can be defended before government agencies on the
basis of SQC records.
