
Topic 2

1) What is a research design?

A framework for every stage of the collection and analysis of data.


"A research design provides a framework for the collection and analysis of data"
(p40). The choice of methods to be used is, indeed, very important, as is an
understanding of your fundamental research philosophy. But a research design
will highlight these choices and other decisions about which elements are
considered to be more important than others, as well as your hypotheses about
causality and predictability. Consider it as a blueprint for the research you
propose to conduct. This chapter looks at five different research designs from
which you could choose.
Page reference: 31 (Key Concept 2.1)
2) What is a research method?
A research method is simply a technique for collecting data. It can involve a
specific instrument, such as a self-completion questionnaire, a structured
interview schedule, or participant observation, whereby the researcher listens to
and watches others.
3) If a study is "reliable", this means that:

the measures devised for concepts are stable on different occasions.


An essential question about any piece of research is its reliability. Concepts in
the social sciences can often be construed differently in different social
contexts, so the promise of repeatability reassures readers that the results can
be relied on. More important still is that there should be little variation (or
none at all) in responses to the same instruments by the same type of respondent.
Bryman gives the example of wild fluctuations in IQ test scores as an indicator
of low reliability of the test itself. When reviewing literature or consulting
secondary sources, we are certainly influenced by the reputation, or simply good
standing in the academic community, of the researcher. This does not imply
uncritical acceptance of their findings, however.
Page reference: 41
4) "Internal validity" refers to:

whether or not there is really a causal relationship between two variables.

"Validity" has a special meaning in research, usually indicating the truth of
something, its authenticity. Many of our research activities can be seen as valid
steps towards producing a dissertation, for example, but our conclusions will not
be worthwhile unless our research was valid. If a measure proves unreliable (see
question 3), it lacks "measurement validity", but "internal validity" is lost when
the "internal" relationship between variables is lost, ambiguous, or confused.
Typically, we argue that "a" causes "b", but if "b" can actually influence the
value of "a", then the suggested causal relationship doesn't really exist.
Page reference: 42
5) Lincoln and Guba (1985) propose that an alternative criterion for
evaluating qualitative research would be:

Trustworthiness
Most tests of reliability and validity are applicable to quantitative data rather
than to qualitative. Lincoln and Guba (1985) propose "trustworthiness" as an
example of a criterion that could determine how good the qualitative research
might have been. This criterion may be subdivided into dimensions of credibility,
transferability, dependability and confirmability (which Bryman examines in
detail in chapter 16), to act as counterparts for reliability and validity in
quantitative research. It is the view of many that whereas running a focus group,
for example, may be 'messier' than conducting a survey, messiness should not
be a goal of the research!
Page reference: 43
6) Naturalism has been defined as:
- viewing natural and social objects as belonging to the same
realm.
- being true to the nature of the phenomenon under investigation.
- minimising the intrusion of artificial methods of data collection
into the field.

Key concept 2.4 explains that "naturalism" is an unusual expression which has
many meanings, some contradictory! All of the definitions shown in this question
are correct, although "a" is positivist as opposed to the interpretivism suggested
by "b" and "c". However, research methodologies like ethnography, or
observation, or unstructured qualitative interviews try to come close to the
natural context of the data, while being relatively non-intrusive.
7) In an experimental design, the dependent variable is:

the one that is not manipulated and in which any changes are
observed.

When conducting an experiment, it is essential to manipulate one variable
(conventionally called "independent") so that changes in another (the
dependent variable) can be identified as indicating a causal relationship. There
is nothing ambiguous about this process in the slightest, nor do personal values
intrude. Given that many "independent variables" cannot be manipulated in
an actual social context, experimentation may be the only way of getting close
to identifying a causal relationship between variables.
Page reference: 45, 46
8) What is a cross-sectional design?

The collection of data from more than one case at one moment in time.
This is often called a survey design because researchers using this method may
produce questionnaires to be filled in by many respondents in the same time
period. The search is for variation within a social group, or between social
groups, in attitudes or orientation to specific variables. Since no manipulation of
variables is possible, correlations between variables are all that can be
discovered. Answer (d) suggests experimentation; answer (a) thinks of
respondents instead of the design; and answer (b) must be wrong because
researchers are always cheerful and bright. Always!
Page reference: 53, 54 (Key concept 2.12)
9) Survey research is cross-sectional and therefore:

High in replicability but low in internal validity.


A survey attempts to discover the range of responses to a set of variables. The
researcher can give a lot of details concerning procedures for selecting
respondents, handling of the research instrument (perhaps a questionnaire) and
the analysis methodology. In this way, replicability can be almost guaranteed.
However, since the analysis can only pinpoint degrees of correlation between
variables, causality remains in the realm of inference, meaning low (or no)
internal validity. Remember that internal validity depends on causality, and
reliability on replicability.
Page reference: 54, 55 (Key concept 2.13)
10) Panel and cohort designs differ, in that:

A panel study can distinguish between age effects and cohort effects,
but a cohort design cannot.
Both panel and cohort studies are types of longitudinal design, similar to
cross-sectional research but conducted over a considerable period of time. Cohorts
are groups of people sharing a characteristic, like age or unemployed status,
whereas panels are typically random samples of the population as a whole. It
follows that a panel study should be able to distinguish between age effects (for
example in the BHPS study) and cohort effects (where being born in the same
time period is the shared characteristic), but the cohort study would only be able
to identify ageing effects. Both types of study suffer from attrition, through death
and emigration, for example. Both are quantitative in nature.
Page reference: 58, 59
11) Cross-cultural studies are an example of:

Comparative design
Bryman prefers "to reserve the term 'case study' for those instances where the
'case' is the focus of interest in its own right." The case study design is usually
focused on those aspects which could only have happened at that time,
in that place, for whatever reason. The comparative design typically studies two
contrasting cases, so that a better understanding of social phenomena can be
formed. Clearly, cross-cultural studies are a good example, therefore, of
comparative design in action. If you gave answer (a) you were moving in the
right direction but you need more than one case; if you gave answer (c) you
should go back to question 2 and page 37; answer (d) is also incorrect for
reasons to be found in question 9.
Page reference: 65 (Key concept 2.19)