
BRM NOTES UNITS 4 & 5

Ques 1: Independent & dependent variables?

 Ans Independent Variable: This is the variable that is manipulated or controlled by the
experimenter. It's called "independent" because its variation doesn't depend on other variables
in the experiment. In other words, changes in the independent variable are believed to cause
changes in the dependent variable.
 Dependent Variable: This is the variable that is observed or measured. Its variation depends on
the independent variable. In other words, it's the outcome variable that changes in response to
the manipulation of the independent variable.

Example:

Let's say you're conducting an experiment to investigate how the amount of sunlight affects plant
growth.

Independent Variable: The amount of sunlight.

Dependent Variable: Plant growth.

Mediator Variable:
Mediator variables explain the relationship between two other variables. They help to understand the
mechanism or process through which one variable influences another.

Moderator Variable:

Moderator variables influence the strength or direction of the relationship between two other variables.
They indicate when or for whom the relationship between the independent and dependent variables
holds true.

Mediator Example:

In a study on the relationship between exercise and weight loss, researchers find that exercise leads to
improved metabolism, which in turn results in weight loss. Here, metabolism acts as the mediator
variable, explaining how exercise influences weight loss.
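
As an illustration only (not part of the original notes), a mediation check in the Baron & Kenny style could be sketched in Python with statsmodels; the column names exercise, metabolism, and weight_loss and the numbers are hypothetical.

```python
# A minimal sketch of a Baron & Kenny-style mediation check using
# statsmodels OLS. The variable names and values are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "exercise":    [1, 2, 3, 4, 5, 6, 7, 8],                 # hours per week
    "metabolism":  [2.1, 2.4, 2.9, 3.2, 3.8, 4.1, 4.6, 5.0],
    "weight_loss": [0.5, 0.8, 1.1, 1.6, 2.0, 2.3, 2.9, 3.4],  # kg
})

# Step 1: exercise should predict weight loss (total effect).
total = smf.ols("weight_loss ~ exercise", data=df).fit()
# Step 2: exercise should predict the mediator (metabolism).
a_path = smf.ols("metabolism ~ exercise", data=df).fit()
# Step 3: with the mediator included, the direct effect of exercise
# should shrink (partial mediation) or vanish (full mediation).
b_path = smf.ols("weight_loss ~ exercise + metabolism", data=df).fit()

print(total.params["exercise"], b_path.params["exercise"])
```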

Moderator Example:
In a study on the effects of technology use on academic performance, researchers find that the
relationship between time spent on smartphones and grades is stronger for high school students
compared to college students. Here, academic level (high school vs. college) acts as the moderator,
influencing the strength of the relationship between technology use and academic performance based
on the students' academic stage.
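
Moderation is commonly tested by adding an interaction term to a regression. A minimal sketch with statsmodels, assuming hypothetical columns phone_hours, grades, and level:

```python
# A minimal sketch of testing moderation via an interaction term in OLS.
# Variable names and values are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "phone_hours": [1, 2, 3, 4, 5, 1, 2, 3, 4, 5],
    "grades":      [88, 84, 79, 73, 68, 85, 84, 82, 81, 80],
    "level":       ["high_school"] * 5 + ["college"] * 5,
})

# 'phone_hours * C(level)' expands to both main effects plus their
# interaction; a significant interaction term indicates moderation
# by academic level.
model = smf.ols("grades ~ phone_hours * C(level)", data=df).fit()
print(model.summary())
```
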
Development of a questionnaire:

 Purpose: Define your research objectives clearly.
 Audience: Identify who will be answering the questionnaire.
 Question Types: Choose between closed-ended, open-ended, Likert scale, etc.
 Draft Questions: Keep questions clear, unbiased, and logically organized.
 Pilot Test: Test with a small sample to identify and fix any issues.
 Finalize: Make adjustments based on pilot-test feedback.
 Distribute: Administer the finalized questionnaire to your target audience.
 Analysis: Analyze collected responses for insights relevant to your objectives.
 Ensure ethical considerations are maintained throughout the process.
 Avoid implicit assumptions in question wording.
 Use the ladder approach and the funnel approach to order questions.

Ladder & funnel approach:

The ladder approach and funnel approach are two strategies for structuring questions in a questionnaire
or survey. Here's a brief explanation of each with an example:

Ladder Approach:

The ladder approach involves starting with general, broad questions and then progressively narrowing
down to more specific or detailed questions. It's like climbing a ladder, moving from a broader
perspective to a more focused one.

Example:

Suppose you're conducting a customer satisfaction survey for a restaurant. You might start with a
general question like, "How satisfied are you with your overall dining experience?" Then, based on the
response, you could follow up with more specific questions about food quality, service, ambiance, etc.
Funnel Approach:

The funnel approach begins with specific questions and gradually broadens to more general inquiries.
It's like pouring liquid into a funnel, starting with a narrow opening and widening out.

Example:

Let's say you're conducting a survey about smartphone usage habits. You might begin with specific
questions like, "How many hours per day do you spend using social media apps on your smartphone?"
Then, you could broaden the scope with questions like, "What activities do you primarily use your
smartphone for?" Finally, you might conclude with a general question such as, "Overall, how satisfied
are you with your smartphone experience?"

Sequencing:

 Arranges questions in a logical order.
 Guides respondents through the survey smoothly.
 Typically follows a structured flow from general to specific topics.

Random Questions:

 Presents questions in a random order.
 Minimizes order effects and biases.
 Each respondent sees questions in a different sequence, reducing response-pattern biases.

Both approaches are important for effective questionnaire design: sequencing provides structure and flow, while randomization helps to reduce response biases (a small randomization sketch follows).
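
A minimal sketch of per-respondent question randomization in Python; the question texts are placeholders:

```python
# A minimal sketch of randomizing question order per respondent using
# only the standard library; the questions are placeholder examples.
import random

questions = [
    "How satisfied are you with the food quality?",
    "How satisfied are you with the service?",
    "How satisfied are you with the ambiance?",
]

def questions_for_respondent(seed=None):
    # random.sample returns a shuffled copy, leaving the master list intact.
    rng = random.Random(seed)
    return rng.sample(questions, k=len(questions))

print(questions_for_respondent())
```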

PARAMETRIC TEST (T-TEST):

The t-test is a statistical method used to determine whether there is a significant difference between the means of two groups.

It is a parametric test that assumes the data are normally distributed.

1. Independent Samples T-Test:

This test is used to compare the means of two independent groups to determine if they are significantly
different from each other.

Example:

Suppose you want to compare the exam scores of two different classes (Class A and Class B) to see if
there's a significant difference in their performance. You would collect exam scores from each class and
then use an independent samples t-test to analyze the data and determine if the mean scores of the two
classes are statistically different.
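
A minimal sketch of how this could be run in Python with scipy.stats.ttest_ind; the scores below are made-up illustrations, not real data:

```python
# A minimal sketch of an independent-samples t-test with SciPy.
from scipy import stats

class_a = [72, 78, 69, 85, 80, 74, 77, 83, 79, 75]
class_b = [68, 70, 65, 74, 72, 69, 71, 66, 73, 70]

# equal_var=False runs Welch's t-test, which is safer when the two
# classes may have unequal variances.
t_stat, p_value = stats.ttest_ind(class_a, class_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 would indicate a significant difference in means.
```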

2. Single Sample T-Test:

This test is used to determine whether the mean of a single sample differs significantly from a known
population mean or hypothesized value.

Example:

Imagine you're a teacher and you want to determine if your students' average score on a test is
significantly different from the national average score. You collect test scores from your students and
then use a single sample t-test to compare their mean score to the national average score to see if
there's a significant difference.
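
A minimal sketch with scipy.stats.ttest_1samp; the class scores and the national average of 70 are illustrative values:

```python
# A minimal sketch of a single-sample t-test with SciPy.
from scipy import stats

class_scores = [74, 78, 69, 81, 76, 72, 79, 83, 71, 77]
national_average = 70  # hypothesized population mean

t_stat, p_value = stats.ttest_1samp(class_scores, popmean=national_average)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the class mean differs from the national average.
```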

3. Paired Samples T-Test:

This test is used when you have two sets of scores that are related in some way, such as pre-test and
post-test scores for the same group of participants.

Example:

Suppose you're conducting a study on the effectiveness of a new teaching method. You administer a
pre-test to measure students' knowledge before the teaching method is implemented, then you
implement the teaching method, and finally, you administer a post-test to measure their knowledge
again. You would use a paired samples t-test to compare the pre-test and post-test scores and
determine if there's a significant difference in knowledge levels after implementing the new teaching
method.
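
A minimal sketch with scipy.stats.ttest_rel, using made-up pre/post scores kept in the same participant order:

```python
# A minimal sketch of a paired-samples t-test with SciPy.
from scipy import stats

# Each index refers to the same participant before and after the new method.
pre_test  = [55, 60, 48, 62, 58, 51, 65, 59]
post_test = [63, 66, 55, 70, 61, 58, 72, 64]

t_stat, p_value = stats.ttest_rel(pre_test, post_test)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests knowledge changed after the teaching method.
```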

4. ANOVA (Analysis of Variance) is a statistical test used to compare the means of three or more
independent groups to determine if there are significant differences between them. Here's a simple
explanation with an example:

One-Way ANOVA:

In a one-way ANOVA, you have one independent variable (with three or more levels or groups) and one
dependent variable.
Example: Let's say you're studying the effect of different types of fertilizer on plant growth. You have
three different types of fertilizer: A, B, and C. You want to know if there are any significant differences in
plant growth among the three fertilizer types.

Group A: Plants treated with fertilizer A (n=10, mean growth = 15 cm)

Group B: Plants treated with fertilizer B (n=10, mean growth = 18 cm)

Group C: Plants treated with fertilizer C (n=10, mean growth = 20 cm)

To conduct a one-way ANOVA, you would first formulate the hypotheses:

 Null hypothesis (H0): There is no significant difference in plant growth among the three fertilizer types.
 Alternative hypothesis (H1): There is a significant difference in plant growth among the three fertilizer types (at least one group mean differs).

You would then compute the F statistic and compare the resulting p-value to your chosen significance level (a code sketch follows).
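
A minimal sketch with scipy.stats.f_oneway, using made-up growth values (cm) matching the group means above:

```python
# A minimal sketch of a one-way ANOVA with SciPy.
from scipy import stats

fertilizer_a = [14, 15, 16, 15, 14, 16, 15, 14, 16, 15]  # mean = 15 cm
fertilizer_b = [17, 18, 19, 18, 17, 19, 18, 18, 17, 19]  # mean = 18 cm
fertilizer_c = [19, 20, 21, 20, 19, 21, 20, 20, 21, 19]  # mean = 20 cm

f_stat, p_value = stats.f_oneway(fertilizer_a, fertilizer_b, fertilizer_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# p < 0.05 -> reject H0: at least one fertilizer differs in mean growth.
```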

TYPE I & TYPE II ERRORS:

Type I and Type II errors are concepts associated with hypothesis testing, especially in the context of
significance testing. Here's a brief explanation of each along with a short example:

Type I Error:

Type I error occurs when you reject a true null hypothesis. In other words, you conclude that there is a
significant effect or difference when, in reality, there is no such effect or difference.

Example:

Suppose you are conducting a medical study to test a new drug's effectiveness in treating a certain
disease. The null hypothesis (H0) states that the drug has no effect, while the alternative hypothesis (H1)
states that the drug is effective.
Type I Error: Concluding that the drug is effective (rejecting H0) when, in fact, it has no effect (H0 is
true). This would be a false positive result.

Type II Error:

Type II error occurs when you fail to reject a false null hypothesis. In other words, you conclude that
there is no significant effect or difference when, in reality, there is an effect or difference.

Example:

Continuing with the medical study example, suppose the drug is indeed effective in treating the disease,
but your study fails to detect this effectiveness.
Type II Error: Failing to conclude that the drug is effective (failing to reject H0) when, in fact, it is
effective (H0 is false). This would be a false negative result.

In summary, Type I error involves incorrectly rejecting a true null hypothesis, while Type II error involves
failing to reject a false null hypothesis. Both types of errors are important to consider in hypothesis
testing, as they impact the validity of research findings and decision-making processes.
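
A small simulation sketch (not from the notes) can make these error rates concrete; it assumes normally distributed data, two groups of 30, and alpha = 0.05:

```python
# A minimal simulation of Type I and Type II error rates for a t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 30, 2000

type1 = type2 = 0
for _ in range(trials):
    # H0 true: both groups drawn from the same distribution.
    placebo = rng.normal(0, 1, n)
    same = rng.normal(0, 1, n)
    if stats.ttest_ind(placebo, same).pvalue < alpha:
        type1 += 1                     # false positive (Type I error)

    # H0 false: drug group truly shifted by 0.3 standard deviations.
    drug = rng.normal(0.3, 1, n)
    if stats.ttest_ind(placebo, drug).pvalue >= alpha:
        type2 += 1                     # false negative (Type II error)

print(f"Type I rate  ~ {type1 / trials:.3f} (close to alpha)")
print(f"Type II rate ~ {type2 / trials:.3f} (depends on effect size and n)")
```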

1. Parametric Test:

 Assumes specific characteristics about the population distribution (e.g., normality, homogeneity of variance).
 Examples include the t-test for comparing means between two groups.

2. Non-parametric Test:

 Does not make assumptions about the population distribution.
 Used when data do not meet parametric-test assumptions or when the data are measured on an ordinal or nominal scale.
 Examples include the Wilcoxon signed-rank test for comparing paired groups (see the sketch below).
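
A minimal sketch of the Wilcoxon signed-rank test with scipy.stats.wilcoxon, using illustrative paired scores:

```python
# A minimal sketch of the Wilcoxon signed-rank test with SciPy,
# for paired measurements on the same participants.
from scipy import stats

before = [10, 12, 9, 14, 13, 11, 15, 8]
after  = [11, 14, 12, 18, 18, 17, 22, 16]

stat, p_value = stats.wilcoxon(before, after)
print(f"W = {stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the paired scores differ systematically.
```
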
MANN-WHITNEY U TEST:

 The Mann-Whitney U test is a non-parametric statistical test used to determine if there is a significant difference between the distributions of two independent groups.
 Rank all data points from both groups combined, disregarding group membership.
 Calculate the U statistic based on the ranks of one of the groups.
 Compare the calculated U statistic to the critical value from the Mann-Whitney U distribution table, or obtain the p-value.
 Example: Comparing the exam scores of two groups, where Group A (median = 75) has significantly higher scores than Group B (median = 65) with p < 0.05 using the Mann-Whitney U test (see the sketch below).
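
A minimal sketch with scipy.stats.mannwhitneyu; the exam scores are illustrative values matching the medians quoted above:

```python
# A minimal sketch of the Mann-Whitney U test with SciPy.
from scipy import stats

group_a = [70, 73, 74, 75, 75, 77, 78, 80]  # median = 75
group_b = [60, 62, 63, 65, 65, 66, 67, 68]  # median = 65

u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
# p < 0.05 -> the two groups' score distributions differ significantly.
```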

KRUSKAL-WALLIS TEST:

The key points of the Kruskal-Wallis test, summarized in three parts:

 Purpose: Determines whether there are significant differences between the distributions of three or more independent groups.

Procedure:

 Ranks all data points across groups.
 Calculates the Kruskal-Wallis H statistic.
 Compares the statistic to the critical value or obtains the p-value.

Interpretation:

 Rejects the null hypothesis if p-value < chosen significance level.
 Concludes significant differences between groups.
 Fails to reject the null hypothesis otherwise.
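
A minimal sketch with scipy.stats.kruskal, reusing the three-fertilizer idea with illustrative growth values (cm):

```python
# A minimal sketch of the Kruskal-Wallis test with SciPy.
from scipy import stats

group_a = [14, 15, 16, 15, 14, 16]
group_b = [17, 18, 19, 18, 17, 19]
group_c = [19, 20, 21, 20, 22, 21]

h_stat, p_value = stats.kruskal(group_a, group_b, group_c)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
# p < 0.05 -> at least one group's distribution differs from the others.
```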

FORMAT OF RESEARCH REPORT:

 Title Page: The title page serves as the cover page of the research report. It typically includes the
title of the research, the author(s) name(s), their affiliations (institution or organization), and the
date of publication or submission.
 Abstract: The abstract is a concise summary of the research paper. It provides an overview of
the research purpose, methodology, key findings, and conclusions. Readers often refer to the
abstract to quickly understand the main points of the study.

 Introduction: The introduction sets the stage for the research by providing background
information on the topic, outlining the research problem or question, stating the objectives or
hypotheses, and explaining the significance or relevance of the study.

 Literature Review: The literature review critically examines existing research and scholarly
articles relevant to the topic of the study. It synthesizes and summarizes previous findings,
identifies gaps in the literature, and provides a theoretical framework or conceptual background
for the current research.

 Methods: The methods section describes how the research was conducted. It includes details
about the research design, such as experimental, correlational, or qualitative methods,
participant characteristics (e.g., sample size, demographics), data collection procedures, and
data analysis techniques.

 Results: The results section presents the main findings of the study. It may include tables,
figures, and descriptive statistics to illustrate and summarize the data collected during the
research. Results should be reported objectively, without interpretation or discussion.

 Discussion: The discussion interprets the results of the study in relation to the research question
or hypotheses. It compares the findings with previous research, discusses any limitations or
biases in the study, and explores the theoretical and practical implications of the results. The
discussion section often ends with suggestions for future research.

 Conclusion: The conclusion provides a summary of the key findings and their implications. It
restates the research objectives or hypotheses and discusses how the study contributes to the
existing body of knowledge in the field. The conclusion may also highlight any practical
applications or recommendations arising from the research.
 References: The references section lists all the sources cited in the research report. It provides
complete bibliographic information for each reference, including authors' names, publication
titles, journal or book titles, publication dates, and page numbers.

 Appendices: The appendices contain supplementary materials that are not essential to the main
body of the research report but provide additional information for interested readers. This may
include raw data, survey instruments, interview transcripts, or detailed statistical analyses.
Appendices are numbered or labeled for easy reference within the text.

OR

1. Title Page: Contains the title, author(s), affiliation, and date.
2. Abstract: Briefly summarizes research purpose, methods, results, and conclusions.
3. Introduction: Gives background, objectives, and hypotheses briefly.
4. Literature Review: Reviews relevant studies and theories.
5. Methods: Describes research design, participants, procedures, and analysis methods.
6. Results: Presents findings with tables, figures, and descriptive stats.
7. Discussion: Interprets results, compares with past research, and discusses implications.
8. Conclusion: Summarizes key findings and their importance briefly.
9. References: Lists all cited sources.
10. Appendices: Includes extra materials like questionnaires or raw data.

Abstract: It is a brief and concise summary of the research project, consisting of:

1. Objective:

Briefly states the purpose or goal of the research, outlining what the study aims to investigate or
achieve.

Tells readers why the research was conducted and what the researchers hoped to find out.

2. Methodology:

Describes the methods or approach used to conduct the research, including study design, data
collection techniques, and analysis procedures.

Explains how the research was carried out, what data was collected, and how it was analyzed.

3. Main Findings:

Summarizes the key results or outcomes of the research, highlighting the most important
findings.
Provides a concise overview of what was discovered or observed during the study.

4. Conclusion:

Offers a brief summary of the conclusions drawn from the research findings, including any
implications or significance.

States the main takeaway points and what the findings mean for the broader topic or field of
study.
