
MB0035 – Legal Aspects of Business

Assignment Set - 1
MBA – III SEM

Question-1 What do you mean by free consent? Under what circumstances is consent considered free? Explain.

Ans: In human resources or industrial/organizational psychology, 360-degree feedback, also known
as multi-rater feedback, multisource feedback, or multisource assessment, is feedback that comes from all
around an employee. "360" refers to the 360 degrees in a circle, with an individual figuratively in the
center of the circle. Feedback is provided by subordinates, peers, and supervisors. It also includes a self-
assessment and, in some cases, feedback from external sources such as customers and suppliers or other
interested stakeholders. It may be contrasted with "upward feedback," where managers are given
feedback by their direct reports, or a "traditional performance appraisal," where the employees are most
often reviewed only by their managers.

The results from 360-degree feedback are often used by the person receiving the feedback to plan
training and development. Results are also used by some organizations in making administrative
decisions, such as pay or promotion. When this is the case, the 360 assessment is for evaluation purposes,
and is sometimes called a "360-degree review." However, there is a great deal of controversy as to
whether 360-degree feedback should be used exclusively for development purposes, or should be used for
appraisal purposes as well (Waldman et al., 1998). There is also controversy regarding whether 360-
degree feedback improves employee performance, and it has even been suggested that it may decrease
shareholder value (Pfau & Kay, 2002).

History

The German military first began gathering feedback from multiple sources in order to evaluate
performance during World War II (Fleenor & Prince, 1997). Also during this time period, others explored
the use of multi-rater feedback via the concept of T-groups.

One of the earliest recorded uses of surveys to gather information about employees occurred in the 1950s
at Esso Research and Engineering Company (Bracken, Dalton, Jako, McCauley, & Pollman, 1997). From
there, the idea of 360-degree feedback gained momentum, and by the 1990s most human resources and
organization development professionals understood the concept. The problem was that collecting and
collating the feedback required a paper-based process involving either complex manual calculations or
lengthy delays. The former led to despair on the part of practitioners; the latter to a gradual erosion of
commitment by recipients.

Multi-rater feedback use steadily increased in popularity, due largely to the use of the Internet in
conducting web-based surveys (Atkins & Wood, 2002). Today, studies suggest that over one-third of U.S.
companies use some type of multi-source feedback (Bracken, Timmereck, & Church, 2001a). Others
claim that this estimate is closer to 90% of all Fortune 500 firms (Edwards & Ewen, 1996). In recent
years, Internet-based services have become the norm, with a growing menu of useful features (e.g., multiple
languages, comparative reporting, and aggregate reporting) (Bracken, Summers, & Fleenor, 1998).
Accuracy

A study on the patterns of rater accuracy shows that the length of time a rater has known the person being
rated has the most significant effect on the accuracy of a 360-degree review. The study shows that raters
in the group “known for one to three years” are the most accurate, followed by “known for less than one
year,” then “known for three to five years,” with the least accurate being “known for more than five
years.” The study concludes that the most accurate ratings come from knowing the person long enough to
get past first impressions, but not so long as to begin to generalize favorably (Eichinger, 2004).

It has been suggested that multi-rater assessments often generate conflicting opinions, and that there may
be no way to determine whose feedback is accurate (Vinson, 1996). Studies have also indicated that self-
ratings are generally significantly higher than the ratings of others (Lublin, 1994; Yammarino & Atwater,
1993; Nowack, 1992).

Results

Several studies (Hazucha et al., 1993; London & Wohlers, 1991; Walker & Smither, 1999) indicate that
the use of 360-degree feedback helps people improve performance. In a 5-year study, Walker and Smither (1999)
found no improvement in overall ratings between the 1st and 2nd years, but higher scores were
noted between the 2nd and 3rd years and between the 3rd and 4th years. A study by Reilly et al. (1996) found that performance
increased between the 1st and 2nd administrations, and that this improvement was sustained 2 years later.
Additional studies show that 360 feedback may be predictive of future performance (Maylett &
Riboldi, 2007).

Some authors maintain that 360 processes are much too complex to make blanket generalizations about
their effectiveness (Bracken, Timmreck, Fleenor, & Summers, 2001b; Smither, London, & Reilly, 2005).
Smither et al. (2005) suggest, "We therefore think that it is time for researchers and practitioners to ask,
'Under what conditions and for whom is multisource feedback likely to be beneficial?' (rather than asking
'Does multisource feedback work?')" (p. 60). Their meta-analysis of 24 longitudinal studies looks at
individual and organizational moderators that point to many potential determinants of behavior change,
including positive feedback orientation, positive reactions to feedback, goal setting, and taking action.

Bracken et al. (2001b) and Bracken and Timmreck (2001) focus on process features that are likely to also
have major effects in creating behavior change and offer best practices in those areas. Some of these
factors have been researched and shown to have a significant impact. Greguras and Robie (1998)
document how the number of raters used in each rater category (direct report, peer, manager) affects the
reliability of the feedback, with direct reports being the least reliable and therefore requiring more
participation. Multiple pieces of research (Bracken & Paul, 1993; Kaiser & Kaplan, 2006; Caputo & Roch,
2009; English, Rose, & McClellan, 2009) have demonstrated that the response scale can have a major
effect on the results, and some response scales are indeed better than others. Goldsmith and Underhill
(2001) report the powerful influence of participants following up with raters to discuss their
results. Other potentially powerful moderators of behavior change include how raters are selected,
manager approval, instrument quality (reliability and validity), rater training and orientation, participant
training, manager (supervisor) training, coaching, integration with HR systems, and accountability
(Bracken et al., 2001b).

Others indicate that the use of multi-rater assessment may not improve company performance. A
2001 Watson Wyatt study found that 360-degree feedback was associated with a 10.6 percent decrease in
market value. Others claim that "there is no data showing that [360-degree feedback] actually improves
productivity, increases retention, decreases grievances, or is superior to forced ranking and standard
performance appraisal systems. It sounds good, but there is no proof it works" (Pfau & Kay, 2002).

Similarly, Seifert, Yukl, and McDonald (2003) state that there is little evidence that the multi-rater process
results in change.

Additional studies (Maylett, 2005) found no correlation between an employee's multi-rater assessment
scores and his or her top-down performance appraisal scores (provided by the person's supervisor), and
advised that although multi-rater feedback can be effectively used for appraisal, care should be taken in its
implementation (Maylett, 2009). This research suggests that 360-degree feedback and performance
appraisals measure different outcomes, and that both 360-degree feedback and traditional performance
appraisals should be used in evaluating overall performance.
