
BR2- Trim 3.

Q1

1. Descriptive Statistics

Statistic                  Restaurant I   Restaurant II   Restaurant III
Mean                       175.3333       183.3333        194.0833
Standard Error             12.9207        12.3309         12.7103
Median                     165            180             188
Mode                       152            193             #N/A
Standard Deviation         44.7586        42.7154         44.0299
Sample Variance            2003.3333      1824.6061       1938.6288
Kurtosis                   -1.1381        -1.1806         -0.9225
Skewness                   0.3571         0.2186          0.2912
Range                      136            125             139
Minimum                    110            122             126
Maximum                    246            247             265
Sum                        2104           2200            2329
Count                      12             12              12
Confidence Level (95.0%)   28.4383        27.1401         27.9752

Steps for descriptive statistics in Excel: Data → Data Analysis → Descriptive Statistics → select input data (total demand) → select output cell → OK
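The same summary measures can be cross-checked outside Excel. A minimal Python sketch using the Restaurant 1 monthly demand from section 2 below (the `statistics` module uses the sample conventions, n−1 in the denominator, that Excel's Descriptive Statistics tool reports, so the results match the Restaurant I column above):

```python
import statistics

# Restaurant 1 monthly demand, Jan-Dec (from the table in section 2).
demand = [200, 235, 232, 178, 184, 140, 145, 152, 110, 130, 152, 246]

mean = statistics.mean(demand)            # arithmetic mean: 175.3333
median = statistics.median(demand)        # middle value: 165
stdev = statistics.stdev(demand)          # sample std dev (n-1): 44.7586
variance = statistics.variance(demand)    # sample variance: 2003.3333
value_range = max(demand) - min(demand)   # range = max - min: 136

print(round(mean, 4), median, round(stdev, 4), value_range)
```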
2.

Month      Restaurant 1   Forecast (α=0.5)   RMSE (α=0.5)   Forecast (α=0.7)   RMSE (α=0.7)
January    200            #N/A               #N/A           #N/A               #N/A
February   235            #N/A               #N/A           #N/A               #N/A
March      232            200                #N/A           200                #N/A
April      178            217.5              #N/A           224.5              #N/A
May        184            224.75             34.7410        229.75             36.3287
June       140            201.375            29.9872        193.525            30.6868
July       145            192.6875           41.8865        186.8575           40.6793
August     152            166.3438           34.3192        154.0573           28.0974
September  110            155.6719           32.8888        147.7172           27.6647
October    130            153.8359           28.2290        150.7152           24.2081
November   152            131.9180           25.4215        122.2145           24.0602
December   246            130.9590           28.0950        127.6644           27.7522
January    -              141.4795           255.5681       144.6993           239.4773

[Chart: Restaurant 1 Exponential Smoothing, Alpha 0.5 (actual vs forecast by data point)]

For Restaurant 1, the RMSE for exponential smoothing with alpha = 0.7 is less than the RMSE for alpha = 0.5; hence the forecasted value for January would be 144.7.
Restaurant 2

Month      Restaurant II   Forecast (α=0.5)   RMSE (α=0.5)   Forecast (α=0.7)   RMSE (α=0.7)
January    203             #N/A               #N/A           #N/A               #N/A
February   238             #N/A               #N/A           #N/A               #N/A
March      247             203                #N/A           203                #N/A
April      193             220.5              #N/A           227.5              #N/A
May        193             233.75             34.5823        241.15             36.1648
June       149             213.375            30.4300        207.445            31.1305
July       157             203.1875           40.8737        197.3335           40.2625
August     161             176.0938           35.1947        163.5001           29.3657
September  122             166.5469           33.3248        158.9500           28.1814
October    130             163.7734           26.7105        160.385            22.5082
November   167             142.8867           25.4418        133.5155           22.2858
December   240             136.4434           30.7939        131.0547           30.4293
January    -               151.7217           257.3518       156.2164           240.3283

[Chart: Restaurant 2 Exponential Smoothing, Alpha 0.7 (actual vs forecast by data point)]

[Chart: Restaurant 2 Exponential Smoothing, Alpha 0.5 (actual vs forecast by data point)]

For Restaurant 2, the RMSE for exponential smoothing with alpha = 0.7 is less than the RMSE for alpha = 0.5; hence the forecasted value for January would be 156.21.

Restaurant 3

Month      Restaurant III   Forecast (α=0.5)   RMSE (α=0.5)   Forecast (α=0.7)   RMSE (α=0.7)
January    202              #N/A               #N/A           #N/A               #N/A
February   255              #N/A               #N/A           #N/A               #N/A
March      265              202                #N/A           202                #N/A
April      205              228.5              #N/A           239.1              #N/A
May        210              246.75             44.2881        257.23             45.4891
June       160              225.875            33.3033        220.669            34.2180
July       166              217.9375           42.2368        213.2007           43.4822
August     174              188.9688           37.1319        175.9602           31.8504
September  126              177.4844           36.0391        168.9881           31.3828
October    148              175.7422           31.6964        172.4964           27.6058
November   173              150.8711           28.8367        139.9489           27.3974
December   245              149.4356           31.8214        145.5847           31.5084
January    -                161.2178           285.3538       164.7754           272.9342
[Chart: Restaurant 3 Exponential Smoothing, Alpha 0.5 (actual vs forecast by data point)]

[Chart: Restaurant 3 Exponential Smoothing, Alpha 0.7 (actual vs forecast by data point)]

For Restaurant 3, the RMSE for exponential smoothing with alpha = 0.7 is less than the RMSE for alpha = 0.5; hence the forecasted value for January would be 164.77.

Steps for exponential smoothing & RMSE in Excel: Data → Data Analysis → Exponential Smoothing → select input data → damping factor = 1 - α → select output cell → check Chart Output & Standard Errors → OK
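The recursion behind Excel's tool can be sketched in Python. This uses the standard simple-exponential-smoothing convention (the first forecast equals the first actual), so rows land one period earlier than in the pasted worksheet, but the α = 0.5 forecast values themselves (200, 217.5, 224.75, ...) are the same; also, this sketch reports one overall RMSE per α rather than Excel's rolling 3-period standard errors, so the magnitudes differ from the RMSE columns while the α comparison comes out the same way:

```python
import math

# Restaurant 1 monthly demand, Jan-Dec (from the table above).
actual = [200, 235, 232, 178, 184, 140, 145, 152, 110, 130, 152, 246]

def ses(data, alpha):
    """Simple exponential smoothing: F[2] = A[1], then
    F[t+1] = alpha*A[t] + (1 - alpha)*F[t]  (damping factor = 1 - alpha).
    Returns forecasts for periods 2..n+1; the last entry is the
    next-period forecast."""
    forecasts = [data[0]]
    for a in data[1:]:
        forecasts.append(alpha * a + (1 - alpha) * forecasts[-1])
    return forecasts

def rmse(data, forecasts):
    """Overall root-mean-squared one-step forecast error."""
    errors = [(a - f) ** 2 for a, f in zip(data[1:], forecasts)]
    return math.sqrt(sum(errors) / len(errors))

for alpha in (0.5, 0.7):
    f = ses(actual, alpha)
    print(alpha, "next forecast:", round(f[-1], 4),
          "overall RMSE:", round(rmse(actual, f), 4))
```

With this data the overall RMSE for α = 0.7 is lower than for α = 0.5, consistent with the conclusion drawn from the worksheet.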

4. To make the emerging patterns easier to see, the smoothed series has been altered by the algorithm to remove unnecessary noise. Other types of time-series patterns are seasonal and cyclical; smoothing removes only the random noise, not these systematic patterns.

5. The Simple Moving Average (SMA) is another method of smoothing data. SMA weights all observations in the sample window equally, whereas exponential smoothing gives more weight to the most recent data in the series. In exponential smoothing, the damping factor (1 - α) determines how much weight past observations receive.
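The weighting difference can be made concrete. A short sketch contrasting a 3-period SMA (equal weights of 1/3) with the geometrically decaying weights that exponential smoothing implicitly places on past observations, using the first six months of the Restaurant 1 demand:

```python
# k-period simple moving average: the last k observations each get weight 1/k.
def sma(data, k):
    return [sum(data[i - k:i]) / k for i in range(k, len(data) + 1)]

# Weights exponential smoothing implicitly puts on past observations,
# most recent first: alpha, alpha*(1-alpha), alpha*(1-alpha)^2, ...
def exp_weights(alpha, n):
    return [alpha * (1 - alpha) ** j for j in range(n)]

demand = [200, 235, 232, 178, 184, 140]
print(sma(demand, 3))       # equal-weight averages of each 3-month window
print(exp_weights(0.7, 4))  # recent data dominates: ~0.7, ~0.21, ~0.063, ...
```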
3)

Case Summary

                        Cases
               Valid            Missing          Total
               N     Percent    N     Percent    N     Percent
$analysis(a)   114   100.0%     0     0.0%       114   100.0%

a. Dichotomy group tabulated at value 1.

All 114 of the 114 cases are valid (no missing responses).

$Response Frequencies

                                       Responses            Percent
                                       N        Percent     of Cases
$Response(a)
  Aptitudetest                         15        7.0%       13.2%
  ResumebasedInterview                 19        8.9%       16.7%
  TechnicalInterview                    5        2.3%        4.4%
  GroupInterview                       45       21.1%       39.5%
  Psychometricassessment               30       14.1%       26.3%
  GroupDiscussions                     41       19.2%       36.0%
  Extempore                            48       22.5%       42.1%
  BusinesscaseIndustryanalysis         10        4.7%        8.8%
Total                                 213      100.0%      186.8%

a. Dichotomy group tabulated at value 1.

Interpretation:

A total of 213 responses were given by 114 respondents. The "Percent of Cases" column shows what share of the 114 respondents marked each round:

We observe that for the Aptitude test, 13.2% of the 114 respondents felt it is redundant.

We observe that for the Resume-based Interview, 16.7% of the 114 respondents felt it is redundant.

We observe that for the Technical Interview, 4.4% of the 114 respondents felt it is redundant.

We observe that for the Group Interview, 39.5% of the 114 respondents felt it is redundant.

We observe that for the Psychometric assessment, 26.3% of the 114 respondents felt it is redundant.

We observe that for Group Discussions, 36.0% of the 114 respondents felt they are redundant.

We observe that for the Extempore, 42.1% of the 114 respondents felt it is redundant.

We observe that for the Business case/Industry analysis, 8.8% of the 114 respondents felt it is redundant.

Steps for multiple response analysis: Code the given responses as 1 = Yes & 0 = No → copy the data into SPSS → rename variables & labels → Analyse → Multiple Response → Define Variable Sets → move all the variables into the set → Dichotomies, counted value = 1 → enter a set name → Add → then Analyse → Multiple Response → Frequencies → select the set → OK
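The arithmetic SPSS performs for a dichotomous multiple-response set can be sketched directly. Toy data with 3 respondents and 3 items (the variable names mirror those above, but the values are made up):

```python
# Each row is one respondent; 1 = "felt this round is redundant", 0 = not.
rows = [
    {"Aptitudetest": 1, "GroupInterview": 1, "Extempore": 0},
    {"Aptitudetest": 0, "GroupInterview": 1, "Extempore": 1},
    {"Aptitudetest": 0, "GroupInterview": 0, "Extempore": 1},
]

n_cases = len(rows)                                    # number of respondents
counts = {k: sum(r[k] for r in rows) for k in rows[0]}
total_responses = sum(counts.values())                 # can exceed n_cases

for item, n in counts.items():
    pct_of_responses = 100 * n / total_responses       # SPSS "Percent"
    pct_of_cases = 100 * n / n_cases                   # SPSS "Percent of Cases"
    print(f"{item}: N={n}, {pct_of_responses:.1f}% of responses, "
          f"{pct_of_cases:.1f}% of cases")
```

"Percent of Cases" sums past 100% whenever respondents tick more than one option, which is why the table above totals 186.8%.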

4)

Factor analysis is used for the reduction and summarization of data.

In the given case, a cigarette manufacturer is interested in determining which variables his potential customers think about when they consider his product. The sample size was 124, comprising male smokers only. The respondents were asked the following questions and requested to rate them on a 5-point scale (1 = completely agree, 5 = completely disagree). The answers related to their perceptions and some attributes of cigarettes. Exploratory factor analysis was performed using the principal component method; factors with eigenvalues greater than 1 were extracted, and the following output was generated:

KMO and Bartlett's Test

Kaiser-Meyer-Olkin Measure of Sampling Adequacy          .561
Bartlett's Test of Sphericity    Approx. Chi-Square   299.316
                                 df                        45
                                 Sig.                    .000

The KMO test checks the adequacy of the sample. The KMO value is 0.561, which indicates the sample is sufficient to run this analysis (>= 0.50; the higher the KMO value, the better the result).

Bartlett's test checks whether the variables are related to each other. The p-value in the table is less than 0.05, which signifies that the variables are correlated.

Total Variance Explained

            Initial Eigenvalues          Extraction Sums of Sq. Loadings   Rotation Sums of Sq. Loadings
Component   Total   % of Var.  Cum. %    Total   % of Var.  Cum. %         Total   % of Var.  Cum. %
1           2.617   26.171     26.171    2.617   26.171     26.171         2.216   22.160     22.160
2           1.653   16.534     42.704    1.653   16.534     42.704         1.566   15.660     37.820
3           1.296   12.960     55.664    1.296   12.960     55.664         1.545   15.454     53.274
4           1.283   12.826     68.490    1.283   12.826     68.490         1.522   15.216     68.490
5            .946    9.460     77.950
6            .610    6.099     84.049
7            .546    5.457     89.506
8            .472    4.716     94.222
9            .378    3.785     98.007
10           .199    1.993    100.000

Extraction Method: Principal Component Analysis.


Here the cumulative variance percentage is 68.490% (>60%), hence we can say that the data is both adequate and reliable and the scale is appropriate for further analysis.

Furthermore, from the scree plot (and the table above), the number of components with an eigenvalue above 1 is 4, hence the data is further classified into 4 factors.

Component Matrix(a)
(Components, in order: Social, Fear, Accessibility, Pricing; loadings below 0.4 suppressed)

Variable                                      Loadings
Enhancement in social status                  .787
Influential People Smoking                    .761
Peer Pressure                                 .625
Ban                                           .625, .519
Images on cigarette pack                      .693
Anti smoking warnings on Digital Platform     .660, .445
Awareness                                     .566
Ease of Access                                .474, .687
Availability of Substitutes                   .425, -.612
Increase in Price                             .511, -.607

Extraction Method: Principal Component Analysis.
a. 4 components extracted.

Rotated Component Matrix(a)
(Components, in order: Social, Fear, Accessibility, Pricing; loadings below 0.4 suppressed)

Variable                                      Loadings
Enhancement in social status                  .912
Influential People Smoking                    .822
Peer Pressure                                 .651
Anti smoking warnings on Digital Platform     .846
Images on cigarette pack                      .762
Awareness                                     -.421, .439
Ease of Access                                .854
Ban                                           .712, .453
Availability of Substitutes                   .788
Increase in Price                             .772

Extraction Method: Principal Component Analysis.
Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 5 iterations.

Steps performed in SPSS: Analyze > Data Reduction > Factor > Descriptives > KMO and Bartlett's test of sphericity > Continue > Extraction > Continue > Rotation > Varimax > Options > Sorted by size & Suppress values below 0.4 > OK
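The Kaiser (eigenvalue > 1) retention rule and the cumulative-variance figure can be checked by hand from the Total Variance Explained table. A sketch using the eigenvalues reported above (with 10 standardised variables the eigenvalues sum to 10, so each component's percent of variance is just its eigenvalue times 10):

```python
# Initial eigenvalues from the Total Variance Explained table.
eigenvalues = [2.617, 1.653, 1.296, 1.283, 0.946,
               0.610, 0.546, 0.472, 0.378, 0.199]

# Kaiser criterion: retain components whose eigenvalue exceeds 1.
n_factors = sum(1 for ev in eigenvalues if ev > 1)

# Percent of variance per component, and cumulative variance of the kept ones.
pct = [100 * ev / len(eigenvalues) for ev in eigenvalues]
cumulative = sum(pct[:n_factors])

print(n_factors, round(cumulative, 2))   # 4 factors explaining ~68.49%
```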
Interpretation: From the above two tables we can see that the entire data is categorised into four factors, namely Social, Fear, Accessibility & Pricing. We use the rotated component matrix, as it shows a higher degree of involvement in the factors and a clearer bifurcation of variables into factors. The four factors (Social, Fear, Accessibility & Pricing) account for 22.16%, 15.66%, 15.45% & 15.22% of the variance, respectively.

5)

Coefficients(a)

                   Unstandardized Coeff.   Standardized                    Collinearity Statistics
Model              B        Std. Error     Beta       t        Sig.       Tolerance   VIF
1   (Constant)     72.876   8.407                     8.668    .000
    Price            .104    .043          .240       2.421    .032       .684        1.462
    CurbWeight       .000    .003         -.022       -.192    .851       .496        2.017
    Horsepower       .086    .014          .834       6.322    .000       .385        2.597

a. Dependent Variable: Speed

The estimated regression equation is: Speed = 72.876 + 0.104*(Price) + 0.000*(Curb Weight) + 0.086*(Horsepower)

Delete any independent variable which is contributing insignificantly and provide your
recommended estimated regression equation

The independent variable contributing insignificantly is Curb Weight, with a p-value of 0.851, which should ideally be less than 0.050. With Curb Weight removed, the equation becomes

Speed = 72.876 + 0.104*(Price) + 0.086*(Horsepower)

(Re-estimating without Curb Weight, model 2 in the stepwise output below gives Speed = 71.328 + 0.084*(Horsepower) + 0.107*(Price).)

Develop a standardized residual plot. Does the pattern of the residual plot appear to be
reasonable?
The plot is left-skewed, i.e. negatively skewed; this happens when mode > median > mean.

Do the data contain any outliers?

There are no values above +2.5 or below -2.5 hence there are no outliers.
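The outlier rule applied here can be sketched (the residual values below are hypothetical; the real ones come from the SPSS standardized-residual output):

```python
# Flag any standardized residual outside +/-2.5 as a potential outlier.
residuals = [0.4, -1.2, 2.1, -0.8, 1.7, -2.3]   # hypothetical values
outliers = [r for r in residuals if abs(r) > 2.5]
print(outliers)   # empty list -> no outliers by the +/-2.5 rule
```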

Do the data contain any influential observation


Model Summary(b)

Model   R         R Square   Adjusted R Square   Std. Error of the Estimate
1       .959(a)   .920       .900                2.58286

a. Predictors: (Constant), Horsepower, Price, CurbWeight
b. Dependent Variable: Speed

The correlation is high (R = 0.959), and the adjusted R square of .900 indicates that 90.0% of the variance in Speed is explained by the model.

Coefficients(a)

                   Unstandardized Coeff.   Standardized                    Collinearity Statistics
Model              B        Std. Error     Beta       t        Sig.       Tolerance   VIF
1   (Constant)     72.650   2.655                     27.359   .000
    Horsepower       .097    .010          .934       9.808    .000       1.000       1.000
2   (Constant)     71.328   2.248                     31.731   .000
    Horsepower       .084    .009          .816       9.080    .000       .768        1.302
    Price            .107    .039          .246       2.736    .017       .768        1.302

a. Dependent Variable: Speed

Horsepower and Price are contributing significantly. Horsepower contributes more than Price, as indicated by the standardized beta coefficients. There is no multicollinearity, as indicated by the VIF values being less than 10. Curb Weight is not contributing significantly to Speed.
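Two of the checks above are simple arithmetic. A sketch computing a fitted value from the re-estimated model 2 and recovering VIF from the reported tolerance (the horsepower and price inputs are hypothetical):

```python
# Re-estimated model 2 from the stepwise table:
# Speed = 71.328 + 0.084*Horsepower + 0.107*Price
def predict_speed(horsepower, price):
    return 71.328 + 0.084 * horsepower + 0.107 * price

# VIF is the reciprocal of tolerance: VIF_j = 1 / (1 - R_j^2).
def vif(tolerance):
    return 1 / tolerance

print(round(predict_speed(150, 25), 3))   # hypothetical car: 150 hp, price 25
print(round(vif(0.768), 3))               # -> 1.302, matching the table
```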

6)
Q1) Which is your preferred area of specialisation?

Ans: - Marketing

- Finance

- Data Analytics

- HR

- Operations

Q2) Which type of industry sector are you targeting?

Ans: - Banking/Financial Services/Insurance

- IT

- Manufacturing

- Conglomerate/FMCG/Healthcare

- Consulting

Q3) Expected CTC as per college standards & comparison with similar level colleges?

Ans: - 5 LPA - 8 LPA

- 8 LPA - 12 LPA

- 12 LPA - 16 LPA

- 16 - LPA - 20 LPA

- 20+ LPA

Q4) What assistance do you seek from the placement cell to prepare you for the placement process?
(multiple choice)

Ans:

- Generic "how to deal with an interview" session

- Technical knowhow session

- Resume building session

- Personality building/Grooming session

- Communication skill upgradation session


Q5) How often do you expect training sessions(mix of 5 training sessions listed in the above
question) to be conducted?

Ans: - 1-2 times a year

- 2-5 times a year

- 5-8 times a year

Q6) Should there be more guest lectures conducted by industry experts to share corporate know-
how of technicals?

Ans: - Yes, it is important

- No, does not add significant value

Q7) How often do you expect the placement cell to conduct a CV-update cycle in a year?

Ans: - Once

- Twice

- Thrice

Q8) How satisfied are you with the current conduct of the placement cell?

Ans: - Extremely Satisfied

- Very Satisfied

- Satisfied

- Dissatisfied

- Very Dissatisfied

Q9) In which of the following areas should the Placement Cell improve? (multiple choice)

Ans: - Industrial Exposure (Live Projects, Guest Lectures)

- Quality of companies for placements

- Average/Median CTC

- Process Conduct

- Training Sessions Quality

- Training Sessions Quantity
