
Human Resource Development Review 11(4) 500–520
© 2012 SAGE Publications
Reprints and permission: sagepub.com/journalsPermissions.nav
DOI: 10.1177/1534484312452265
http://hrd.sagepub.com

The "Science of HRD Research": Reshaping HRD Research Through Scientometrics

Greg G. Wang1, Jerry W. Gilley1, and Judy Y. Sun1

Abstract
We explore opportunities for assessing and advancing Human Resource Development
(HRD) research through an integrative literature review of scientometric theories and
methods. Known as the "science of science," scientometrics is concerned with the
quantitative study of scholarly communications, disciplinary structure, and the
assessment and measurement of research impact. The integrative review of the
scientometric literature showed its importance in evaluating HRD research and
publications, including citation analysis, citing behavior analysis, and the Social Science
Citation Index (SSCI) journal quality control process. We discuss three major implications
for engaging HRD scholars in evaluating and assessing HRD research and scholarly
communications for the quality control and self-regulation of HRD research.

Keywords
scientometrics, citation analysis, research quality assessment, HRD research impact,
SSCI

With several decades of continued effort in theory building and research, Human
Resource Development (HRD) has evolved into an integrated disciplinary system. This
system currently includes (a) a leading research association, the Academy of Human
Resource Development (AHRD); (b) a suite of four journals, Advances in Developing
Human Resources (ADHR), Human Resource Development International (HRDI),
Human Resource Development Review (HRDR), and Human Resource Development
Quarterly (HRDQ); (c) three international research conferences held annually on three
continents, the Americas, Asia, and Europe; and (d) an increasing number of universities
offering HRD degrees around the world. Although research on evaluating HRD
and training interventions in organizational settings has been a hot topic for decades
(Wang & Spitzer, 2005), the assessment of HRD research quality and its associated impact
has yet to receive adequate attention in the literature. Recent advances in scientometrics
and related technologies have made it possible for scholars to evaluate and assess
HRD research. This article is an initial effort to promote the assessment of HRD
research and related communication by applying scientometrics, with the aim of
continually strengthening HRD research. Scientometrics, known as the "science of science"
(Moravcsik, 1984), is concerned with the quantitative study of scientific communication
(Geisler, 2005). Its basic tenet is that knowledge creation and control are amenable, and
thus equally subject, to measurement and evaluation (Leydesdorff, 2001).

1 The University of Texas at Tyler, Tyler, TX, USA

Corresponding Author:
Greg G. Wang, Department of Human Resource Development and Technology, College of Business and
Technology, The University of Texas at Tyler, 3900 University Blvd., Tyler, TX 75799, USA
Email: wanggreg@aol.com

Research Question and Significance


We seek to address the following research question: How may HRD research benefit
from scientometric analysis? The purpose is to explore opportunities for assessing and
advancing HRD research and to engage scholars in the measurement and evaluation of
HRD research activities for quality control and self-regulation. Applying scientometric
analysis to HRD research is likely to reshape it in the following aspects.
First, it may improve our understanding of the state of HRD research. Through
scientometric research, the HRD knowledge structure, research topics and evolving fronts,
as well as HRD journal quality, can be effectively identified and assessed. It is also an
effective way to understand and determine the overall quality and maturity level of
HRD research. Second, applying scientometrics to HRD research offers the potential to
guide future HRD research through analysis of scholarly communications between
HRD and related disciplines. This may further develop HRD's unique identity as a
discipline while contributing to the overall knowledge base. Third, scientometrics has
long been considered a self-regulated control mechanism for the quality of scholarly
journals (Taborsky, 2007), particularly when the peer-review process is considered
"flawed" on multiple counts (Smith, 2006, p. 178). HRD journals may benefit from
scientometric analysis for improvement. In short, applying scientometrics to HRD
research may help answer the following questions: (a) How do we know HRD research
has made an impact on scholarly communities? (b) What is the quality of HRD
research in relation to other related disciplines?
Given the vast literature and wide range of methods covered in scientometric
research, this review is necessarily selective. We thus focus on the literature
that can be applied to the assessment and evaluation of HRD research. In what follows,
we first review the evolution of scientometrics as an interdisciplinary field of
study, followed by two major areas of research: citation analysis and citing behavior
analysis. We further discuss the role of the Social Science Citation Index (SSCI) as a
mechanism for journal quality and impact and highlight the latest
advances in scientometric methods and indices in relation to journal performance and
scholarly research impact. Lastly, we discuss the implications and future research
opportunities for HRD research.

Scientometrics: Evolution and Scope


Scientometrics originated in the 1920s. Gross and Gross (1927) proposed that
data about citation rates be used by librarians to make acquisition decisions on academic
journals for libraries with limited budgets. Concurrently, Lotka (1926) derived the
productivity distribution among scientists, later known as Lotka's law. The
"Law of Scattering" further described mathematically how the literature on a particular
subject was distributed across academic journals (Bradford, 1950). Garfield (1955)
extended this line of research and pioneered a "citation index" as a "bibliographic system for
science literature that can eliminate the uncritical citation of fraudulent, incomplete,
or obsolete data by making it possible for the conscientious scholar to be aware
of criticisms of earlier papers" (p. 108). Proposing the concept of "impact factor"
(p. 109), Garfield suggested that counting citations to publications would be a more
efficient measure of scholars' productivity than counting the publications themselves.
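Of these early statistical regularities, Lotka's law admits a compact statement. As an illustrative sketch of its classic inverse-square form:

```latex
f(n) = \frac{C}{n^{2}}
```

where f(n) is the number of authors contributing n publications in a field and C is a field-specific constant, so roughly one quarter as many authors produce two papers as produce one.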
Building on Garfield's (1955) work, Price (1965) made it possible for scientometrics to
become a discipline by analyzing massive citation data. Through quantitative
modeling, this study unveiled how scientific networks were connected through published
articles in the natural sciences. It also discovered that citation-based analysis was
able to identify the "nature of the scientific research fronts" (p. 149) for any discipline.
Later, Zipf's (1972) law of word frequency was associated with the frequency
distribution predicted by Bradford's law. Subsequently, the Pratt Index (Pratt,
1977) was developed to measure the degree to which research publications on a given
subject were concentrated within a collection of journals, referred to as the Bradford-
Zipf distribution (Bulick, 1978). Price (1976) further proposed a unifying theory for all
the statistical "laws" in scientometrics to explain diverse behaviors such as article scatter,
the frequency of keyword use, and a particular journal's accumulated citation count.
Over time, scientometrics has evolved into an established interdisciplinary field
applicable to research in all natural and social science disciplines.
Its scope covers three major areas. First, it examines a discipline's general system
development, disciplinary structure, and interrelations (Leydesdorff, 2001) and explores
research frontier dynamics (Price, 1965). Second, it focuses on the process of
knowledge production, including the quantitative assessment of intellectual potential
(Braun, Glänzel, & Schubert, 1985), communications in research (Geisler, 2005;
Leydesdorff, 2001), research productivity and the evaluation of scholars or institutions
(Narin, 1976), research collaboration, and the structure of research communities and
networks (Price, 1965). Third, it studies the macroenvironment of scientific research:
science policy (Garfield, 1972; Moed, Burger, Frankfort, & van Raan, 1985), innovation
processes (Panne, 2007), and globalized knowledge production (McMillan &
Hamilton, 2000). To this end, understanding and applying the three areas of
scientometrics to HRD research may facilitate further development and maturity of
the field.

Citation Analysis
Knowledge creation involves more than research. Without communication, previous
findings would be lost, and scholars would have to reinvent the wheel. In practice,
research must be communicated through peer-reviewed publications (Schoonbaert &
Roelants, 1996). Research publications disseminate new findings and invite peers to
use or criticize them. When scholars accept or reject a published result, they indicate it in
their writings in the form of a citation.
By definition, a citation is a reference to a published work by a more recent
work. The latter is the "citing" item, and the one receiving the citation is the "cited"
item. In scientometric analysis, citation counts are used as raw data and are considered
"unobtrusive measures that do not require the cooperation of a respondent and do not
themselves contaminate the response (i.e., they are non-reactive)" (Smith, 1981,
p. 84). As such, citation analysis is a method of exploring the structure and the
interrelationships of a discipline. It can be used to examine three phenomena:
(a) direct citation, (b) citation coupling, and (c) cocitation. Because scientometrics
originated from analyzing citation counts (Gross & Gross, 1927), research on direct
citation has been a critical component for understanding who cites what and why.
Citation coupling analysis studies the sharing of one or more references by two or
more later studies (Small, 1982; Van den Besselaar & Leydesdorff, 1996). Cocitation
analysis examines the pattern in which two or more earlier items are jointly cited
by later literature (Small, 1982). Tsai and Wu (2010) provide a recent
example of cocitation and coupling analyses of the knowledge structure in management
research. The analysis generally involves counting how often citations occur or
co-occur, providing insights on author cocitation, journal cocitation, or keyword
cocitation; these counts can be tallied directly from reference lists, as sketched below.
Changes in coupling and cocitation patterns over time in a discipline
can offer insight into the process of research evolution (Tsai & Wu, 2010).
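As a minimal illustration of these counts (a sketch of the general idea, not the specific procedure of Tsai and Wu, 2010), the following Python fragment tallies cocitation and coupling frequencies from toy reference lists; all paper and reference identifiers are hypothetical:

```python
from collections import Counter
from itertools import combinations

# Toy data: each citing paper maps to the set of earlier items it cites.
references = {
    "paper_A": {"Garfield1955", "Price1965", "Small1982"},
    "paper_B": {"Garfield1955", "Price1965"},
    "paper_C": {"Price1965", "Small1982"},
}

# Cocitation: two earlier items cited together by the same later paper.
cocitations = Counter()
for refs in references.values():
    for pair in combinations(sorted(refs), 2):
        cocitations[pair] += 1
print(cocitations.most_common(2))  # most frequently cocited pairs

# Citation coupling: two later papers sharing one or more references.
coupling = Counter()
for (p1, r1), (p2, r2) in combinations(references.items(), 2):
    if r1 & r2:
        coupling[(p1, p2)] = len(r1 & r2)
print(coupling)  # shared-reference counts per pair of citing papers
```

The same counters, run over author names or keywords instead of reference identifiers, yield author cocitation and keyword cocitation data.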
Citation analyses have been conducted at different levels for various purposes,
including assessing and evaluating national science policies and disciplinary development
(Oppenheim, 1997; Tijssen, van Leeuwen, & van Raan, 2002), departments and
research laboratories (Bayer & Folger, 1966; Narin, 1976), books and journals
(Garfield, 1972), and individual scholars (Cole & Cole, 1973). In these studies, citation
counts in peer-reviewed publications were used to measure the effect of the cited
items on related research topics. The rationale is that frequently cited publications are
those likely to have generated more responses from scholars (van Raan, 2005). In the
literature, citation analysis is often associated with two competing research paradigms:
the normative theory and the social constructivist theory of citing behavior. Both are
rooted in broader social theories of science.

The Normative Theory


Following Merton's (1988) sociology of science, normative theory states that scholars
(should) acknowledge the research contributions of their colleagues by citing their work.
Hence, citations signify the intellectual or cognitive influence of earlier scholarly work.

The reference serves both instrumental and symbolic functions in the trans-
mission and enlargement of knowledge. Instrumentally, it tells us of work we
may not have known before, some of which may hold further interest for us;
symbolically, it registers in the enduring archives the intellectual property of
the acknowledged source by providing a pellet of peer recognition of the
knowledge claim, accepted or expressly rejected, that was made in that
source. (p. 622)

Furthermore, the cognitive symbol connecting citing scholars to an earlier work
may be examined through content analysis of the citation context (Small, 1982). For a
given set of citing publications, the percent uniformity, or the degree to which citing
scholars demonstrate consensus on the nature of the cited concept, can be estimated to
identify the ideas symbolized by the cited items. Studies have adopted this approach
to characterize the concept-symbolizing nature of cited works by exploring the content
of the citation context (Small, 1982). Because of the intellectual and cognitive influence
that can be ascribed to a citation, normative theory considers evaluative citation
analysis appropriate for the assessment of research quality (van Raan, 2005).

The Social Constructivist Theory


The social constructivist orientation in citation analysis seeks to explain citing behavior
based on the constructivist sociology of science (Knorr-Cetina, 1981). It questions
normative theory and the validity of evaluative citation analysis, arguing that the
cognitive content of publications has little influence on how they are received.
Scholarly knowledge is socially constructed under the influence of political and financial
resources and the use of rhetorical devices (Knorr-Cetina, 1981). Thus, citations
cannot be adequately described unidimensionally through the intellectual content of the
publication itself. Scholars have different motivations for citing under given intellectual
and practical environments, which are socially constructed. Citing helps
convince the audience that the knowledge claims in new research represent an
advance over previous research (Gilbert, 1977). For this purpose, scholars tend to cite
published work that they assume the audience will regard as "authoritative" (Moed &
Garfield, 2004).
Cole (1992) aligns the two theoretical orientations with two concepts: local knowledge
outcomes (LKOs) and communal knowledge outcomes (CKOs). An LKO is produced
in a particular context and may be influenced by social processes. A CKO is work
that is accepted by the research community as important and correct; it may
be generalized as independent of social process and environment. At the microlevel,
LKO-based research may be conducted in specific social contexts and through a series
of social processes. In this sense, the content of science is socially constructed. At the
macrolevel, with CKOs, in phases when "normal science" is conducted, the normative
theory of science may prevail. As such, core knowledge may be characterized by
virtually universal consensus, and scholars can accept this knowledge as given and take it
as a starting point for research on local knowledge (Cole, 1992).
Empirical studies have examined the two theoretical orientations (Baldi, 1998;
Stewart, 1983). Examining the normative and social constructivist processes in the
allocation of citations with a network-analytic model, Baldi (1998) identified significant
positive effects of a cited article's cognitive content and quality on the probability
of citation. This confirms a normative interpretation of the allocation of citations,
in which citations reflect the acknowledgment of intellectual contributions, but does
not support the social constructivist view that citations are rhetorical tools of
persuasion (Baldi, 1998). In reviewing and reflecting on the theoretical development of
scientometrics, Cronin (2005b) concluded that

The weight of empirical evidence seems to suggest that scientists typically cite
the works of their peers in a normatively guided manner, and that these signs
(citations) perform a mutually intelligible communicative function. (p. 1508)

In addition, citation counts are often considered to have predictive power. Garfield
(1986) found that the expected impact of Nobel Prize winners on the scientific
community is reflected significantly in their citation records long before they receive
their prizes. In recent years, citation rates have become increasingly important in judging
the research quality of journals, institutions, and departments, as well as of individual
faculty members (Klein & Chiang, 2004).
Another related area is co-word analysis. This type of analysis assumes that an
article's keywords symbolize the core content of its research topic and represent its
links with the research problems. One or more co-occurring keywords in articles
constitute a link among studies (Tsai & Wu, 2010). The presence of many co-occurrences
around the same keyword(s) points to a locus of strategic alliance among studies that
may correspond to a research topic. Co-word analysis thus reveals patterns and trends
in a discipline through the association strengths among its keywords. In other words,
co-word analysis visualizes the intellectual structure of a discipline as maps of
conceptual space, and a time series of such maps produces a trace of changes in the
intellectual domain (Ding, Chowdhury, & Foo, 2001).
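A minimal sketch of the counting step behind co-word analysis, assuming keyword sets have already been extracted from articles (the keywords below are hypothetical); producing the conceptual-space maps themselves would require additional scaling or clustering steps:

```python
from collections import Counter
from itertools import combinations

# Toy data: keyword sets from a batch of articles.
articles = [
    {"HRD", "training transfer", "evaluation"},
    {"HRD", "training transfer", "learning"},
    {"HRD", "knowledge sharing"},
]

co_occurrence = Counter()
for keywords in articles:
    for pair in combinations(sorted(keywords), 2):
        co_occurrence[pair] += 1

# Frequently co-occurring pairs point to loci of shared research topics.
print(co_occurrence.most_common(3))
```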

Citing Behavior Analysis


According to Smith (1981), the use of citation counts as an indicator of research
impact is appropriate only if the following premises hold: (a) a citation means that the
citing author has used the cited item; (b) a citation reflects the quality of that item; and
(c) citations are made to the best possible works. However, the central problem of using
citation counts to measure research impact is that the standards and conventions of
citation are not precisely formalized (Cronin, 1982). A citation itself cannot disclose
sufficient understanding of why exactly an author cites a certain item; neither can it
reflect what specific aspect of the cited item the author actually refers to (Brooks,
1986). Therefore, exploring citing behaviors and the factors determining when and why
scholars cite others has been an area of study. Scientometricians have examined
different citing behaviors in various disciplines. We summarize three major taxonomies
of citing behaviors in Table 1.
The literature has also explored factors that influence citing behaviors in various
disciplines along multiple dimensions, including (a) time, (b) discipline, (c) journal,
(d) article, and (e) author/reader dimensions (Bornmann & Daniel, 2008). With the passage
of time and increases in research output, citations become more frequent, and more
citations are expected for recent publications than for older ones because more research
advances are available (Cawkell, 1976). Also, the more frequently a publication has been
cited in the past, the more frequently it will be cited in the future. Research has shown that
the expected number of future citations is a linear function of current citations (Burrell,
2003; Cano & Lind, 1991). Furthermore, studies have revealed that citing behaviors and
practices vary among disciplines (Braun, Glänzel, & Grupp, 1995a, 1995b; Hargens,
2000; Hurt, 1987) and even within different areas of a subdiscipline (Klamer & van
Dalen, 2002). In some disciplines, scholars cite recent literature more often than in others
(Peters & van Raan, 1994). As the possibility of being cited is associated with the number
of journals in a discipline (Moed et al., 1985), smaller disciplines generate
fewer citations than more general fields (King, 1987). This is likely to be the case
for HRD compared with psychology or management.
The citation of an article is found to depend on the frequency of publication
of journals containing related articles (Stewart, 1983). The position of an article within
a journal issue also affects its citation rate (Smart & Waldfogel, 1996). A journal's
accessibility, visibility, and internationality, as well as its quality or prestige, influence
the probability of citation (Boyack & Klavans, 2005; Moed et al., 1985; Yue &
Wilson, 2004). Methodological, conceptual, and literature review pieces tend to attract
more citations than empirical studies (Cano & Lind, 1991; MacRoberts & MacRoberts,
1996). We expect this to be the case for HRDR, given its editorial policy on
conceptual and review articles.
Positive correlations have been identified between citation frequency and the number
of coauthors, the length of an article, and the number of references listed in an article (Baldi,
1998; Boyack & Klavans, 2005). The article dimension is linked to the author/reader
dimension. As citations are often affected by social networks, authors tend to cite works
by those with whom they are personally acquainted (White & McCain, 1998). This
results in greater reciprocal exchange of citations over time (Cronin, 2005a). Although
these phenomena can be observed in the HRD literature, empirical investigation
may help us understand the sociology of HRD research networks and their relationship
with knowledge production.

Table 1. Taxonomy of Citing Behaviors

Garfield (1962):
•  Paying homage to pioneers;
•  Giving credit or homage to related work;
•  Identifying methodology;
•  Providing background reading;
•  Correcting earlier work;
•  Criticizing previous work;
•  Substantiating claims;
•  Alerting to forthcoming work;
•  Providing leads to poorly disseminated, indexed, or uncited work;
•  Authenticating data and classes of fact;
•  Identifying the origin of an idea or concept;
•  Disclaiming work or ideas of others (negative claims);
•  Disputing priority claims of others (negative homage) (p. 85).

Moravcsik and Murugesan (1975):
•  Conceptual versus operational citations;
•  Evolutionary versus juxtapositional citations;
•  Organic versus perfunctory citations;
•  Confirmative versus negational citations;
•  Valuable versus redundant citations.

Bornmann and Daniel (2008):
•  Affirmational: confirming, supporting, or agreeing with cited items;
•  Assumptive: for assumed knowledge or background in cited items;
•  Conceptual: for conceptual or theoretical support;
•  Contrastive: for contrasts or alternatives;
•  Methodological: for methodological support;
•  Negational: questioning, correcting, or negatively evaluating cited items;
•  Perfunctory: citing without additional comment, as a redundant reference, or where the citation is not apparently relevant to the author's immediate concerns;
•  Persuasive: citing in a "ceremonial fashion," or to a recognized authority in the field.

Note: Summarized from Bornmann and Daniel (2008), Garfield (1962), and Moravcsik and Murugesan
(1975).

Problems in Citing Behavior and Citation Analysis


An important assumption of citation analysis is that scholars cite sources that
influence their research and ensure that cited items are of the highest quality
among all citable works (Borgman & Furner, 2002; Cronin, 2005b; Smith, 1981; van
Raan, 2005). Yet the literature has found that this assumption does not always hold in
reality because the complexity of unregulated citing behaviors can make
analysis problematic or even distort citation analysis (MacRoberts & MacRoberts,
1996, 2010; Radicchi & Castellano, 2012). Of the problems identified, we highlight
those most likely to be encountered in HRD research and publications.
First, because of oversight or lack of awareness, scholars may not cite the most
influential and relevant items in their work (Garfield, 1980; MacRoberts & MacRoberts,
1996). Second, biased citing is frequently found in published items. In a study of the
history of genetics, MacRoberts and MacRoberts (1987) uncovered that only 37% of
13 facts were cited correctly across 93 citations, and the citing was highly biased. Notably,
an important cause of citing bias revealed in the same study is secondary-source
citation. Of the citations analyzed,

38% were to secondary sources; that is, over one-third of the "credit" given was
taken from the discoverer and allotted to someone who had nothing to do with
the discovery. (p. 344)

A third problem is perceived excessive self-citing. Self-citations were estimated to
vary from 10% to 30% of overall citations, depending on the discipline
(MacRoberts & MacRoberts, 1996). Through text analysis and expert interviews, Hyland
(2003) found that self-citing "reflect(s) both the promotional strategies of individuals
and the epistemological practices of their disciplines" (p. 251). More recently, in
scrutinizing four psychology journals published in 2006 and 2007, Brysbaert and
Smyth (2011) reported self-citation rates of 0% to 45% and concluded that self-citation
is "a self-serving bias motivated by self-enhancement and self-promotion" (p. 129).
All three problems may exist in HRD research. The first problem may sometimes
be addressed through the peer-review process. The second problem, especially crediting
the wrong source through secondary referencing, can be observed. Self-citing
in HRD is also not uncommon. Studies exploring these issues in HRD publications
may enhance our awareness of their prevalence and severity and support conscientious
research rigor.
The issue of multiple authorship. How to allocate appropriate credit among multiple
coauthors affects the accuracy and credibility of citation analysis of scholarly impact.
Traditional practices have been (a) exclusive counting, giving full credit to the
first author (MacRoberts & MacRoberts, 1996); (b) inflationary counting, distributing
full credit repeatedly to all coauthors; or (c) equalizing counting, dividing the
credit among all coauthors in equal fractions (Eggert, 2011; Hagen, 2008), as
contrasted in the sketch below.
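A minimal sketch contrasting the three traditional counting schemes for a single paper; the function names are ours, not drawn from the cited literature:

```python
def exclusive(citations, n_authors):
    """Full credit to the first author only."""
    return [citations] + [0] * (n_authors - 1)

def inflationary(citations, n_authors):
    """Full credit repeated for every coauthor."""
    return [citations] * n_authors

def equalizing(citations, n_authors):
    """Credit divided equally among coauthors."""
    return [citations / n_authors] * n_authors

# A paper with 30 citations and three coauthors:
print(exclusive(30, 3))     # [30, 0, 0]
print(inflationary(30, 3))  # [30, 30, 30]
print(equalizing(30, 3))    # [10.0, 10.0, 10.0]
```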
Given its importance to research ethics and scholarly performance, the literature has
been exploring solutions in two areas. One is to advocate changing "authorship" to
"contributorship" for "disclosing to the reader of every participant's contribution to the
work and to the manuscript" (Rennie, Yank, & Emanuel, 1997, p. 582). Guidelines have
been proposed to make the contributions of coauthors more specific and transparent in
addition to coauthor ranking (Eggert, 2011; Frazzetto, 2004). However, these guidelines
are not widely known, and the practice has been rare. The second area is to develop
analytical tools for allocating coauthorship credit. Recently, a p-index has been proposed
to give credit according to authorship rank and number of coauthors (Prathap, 2011).
It is designed to "combine fractional credit simultaneously on a paper and citation basis
for each paper to compute the fractional value" for each coauthor (p. 240). The p-index
measure is still based on the assumption of "contributorship" and has been neither
empirically tested nor widely adopted.

The Social Science Citation Index (SSCI)


To be listed in the SSCI has become synonymous with quality for social science
journals, particularly as citation-based SSCI listing has become a key measure for
institutional decisions about appointments, promotions, salaries, resources, and awards
(Klein & Chiang, 2004). A similar trend is also emerging in the HRD community
(Russ-Eft, 2008, 2010). Whether a journal can be included in the SSCI depends on the
following: (a) meeting its own publication schedule; (b) maintaining international
editorial conventions: "These conventions include informative journal titles, fully
descriptive article titles and abstracts, complete bibliographic information for all cited
references, and full address information for every author" (Testa, 2008, p. 72); (c) being
peer reviewed (Garfield, 1990; Testa, 2008); (d) having broad geographic representation
among the authors of the journal and of the articles cited (Garfield, 1990; Testa,
2008); and (e) being cited by other SSCI journals (Garfield, 1979,
1990; Testa, 2008).
It is important to note an unfortunate historical fact from a science policy perspective.
In the early years, the U.S. National Science Foundation (NSF) missed an
opportunity that might have made the (Social) Science Citation Index [(S)SCI] a public
service. From 1954 to 1960, failing to foresee the strategic importance of scientometrics
in evaluating and assessing research, the NSF rejected Garfield's funding proposal
requesting US$5,900 over a period of two years. The proposal was to conduct further
research and create a publicly accessible citation index (Lederberg, 2000). The rejections
forced Garfield to commercialize the idea, which created the SCI and the SSCI in
the 1960s. As such, it may not be necessary to question why academic communities
should follow the "policing" of a for-profit corporation (e.g., Klein & Chiang, 2004).
However, the literature has expressed concerns about SSCI practices in the following
areas: (a) Selection criteria: It is observed that the (S)SCI does not apply its criteria
consistently across all journals. For example, a list of non-peer-reviewed magazines is
included in the SSCI, such as the Nation, New Republic, Commentary, and Fortune
(Klein & Chiang, 2004). (b) Disciplinary biases: Because of differences in citation
practices and citing conventions, citations in different disciplines may bias the
impact factor (MacRoberts & MacRoberts, 2010). (c) Authorship merit: The SSCI
gives equal credit to citations of coauthors. Although cited articles in the SSCI are
listed by first author only, this does not prevent citation analysis from accrediting these
citation counts to all coauthors, which has encouraged increasing numbers of publications
with multiple coauthors (Schoonbaert & Roelants, 1996). (d) Length of articles:
The emphasis on citations has caused authors to reduce content to the least
publishable unit in order to drive up citations (Maddox, 1989). (e) Self-citation:
It is accepted, up to a limit, that self-citation can be justified (e.g., to demonstrate
a track record of, or authority in, research), but when it surpasses the acceptable
limit, it should be discounted. Yet the Institute for Scientific Information (ISI) has not
specified the "acceptable limit" and has sometimes used 20% as a loose measure (Taubes,
1993).
Nonetheless, the (S)SCI has gained popularity and an authoritative position in
indexing quality journals and has been generally regarded as the "gold standard
for databases offering indexing in the social sciences" (Bedeian, Van Fleet, &
Hyman, 2009, p. 216). It has received wide praise, as represented by the following
quote:

The development of the Science Citation Index represented a fundamental
breakthrough in scientific information retrieval. What began as a commercial
product . . . has evolved into a sophisticated set of conceptual tools for
understanding the dynamics of science. The concept of citation analysis today forms
the basis of much of what is known variously as scientometrics, bibliometrics,
infometrics, cybermetrics, and webometrics. Garfield's invention continues to
have a profound impact on the way we think about and study scholarly
communication. (White & McCain, 1998, p. 328)

Journal Quality and the Impact Factor (IF)


Journal quality is conventionally measured by the citation-based journal impact factor
(JIF). The standard JIF is derived over a period of two years, though it may be derived
for any time period. Two elements enter the computation: (a) the
numerator, the number of citations in a given period to any items published by the journal;
and (b) the denominator, the number of substantive articles published in the same
period. Journal Citation Reports (JCR) publishes annual JIFs; the two-year window gives
more weight to rapidly changing fields (Garfield, 1997). For example, a yearly JIF is
calculated by dividing the number of current-year citations (e.g., 2011) to a journal's
articles published in the previous two years (e.g., 2009 and 2010) by the combined total
of those articles. The design of the JIF includes a component measuring editorial
performance and judgment. The denominator includes only peer-reviewed articles,
defined as substantive items, and excludes nonsubstantive items such as editorials,
interviews, and book reviews. Yet, among the nonsubstantive items, editorials often cite
items published in the journal and attract citations; these citations are added to the
numerator of the JIF. Thus, the more an editorial cites its own journal's articles or
attracts citations, all other things being equal, the greater the resulting JIF.
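Written out for the worked example above, the two-year JIF is:

```latex
\mathrm{JIF}_{2011} =
  \frac{\text{citations in 2011 to items the journal published in 2009 and 2010}}
       {\text{number of substantive articles the journal published in 2009 and 2010}}
```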
In relation to the JIF, a number of other metrics are also used by the SSCI to measure
journal performance, including citation density and half-life. Citation density is the
average number of references cited per article. The half-life index is the number of
journal publication years, counting back from the current year, that account for 50% of the
total citations received by the cited journal (Garfield, 2006). The half-life
index is an indicator of the rate at which a journal's articles become obsolete. This, in
turn, may reflect the rate of obsolescence of knowledge in the journal's subject area. For
example, if a journal has a half-life of 4, half of the citations the journal
receives in a given year are to articles published during the previous four years, and the
remaining citations are dispersed among all the articles the journal has published
since it was founded. Research has found that these metrics are discipline
specific (Garfield, 1999).
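A minimal sketch of the half-life computation, assuming the current year's citations have already been bucketed by cited-article age (the counts below are hypothetical; the published JCR half-life interpolates to fractional years, which this integer version omits):

```python
def cited_half_life(citations_by_age):
    """Number of publication years, counting back from the current year,
    that account for at least 50% of the citations received this year.

    citations_by_age[k] = citations this year to articles published
    k years ago (index 0 = the current year).
    """
    half = sum(citations_by_age) / 2
    running = 0
    for age, cites in enumerate(citations_by_age):
        running += cites
        if running >= half:
            return age + 1
    return len(citations_by_age)

# Half of this year's citations fall within the four most recent years.
print(cited_half_life([10, 15, 15, 20, 20, 15, 5, 5, 5]))  # -> 4
```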
Of the many controversies about the JIF, Hoeffel (1998) explained the reason for its
popularity:

Experience has shown that in each specialty the best journals are those in which
it is most difficult to have an article accepted, and these are the journals that
have a high impact factor. Most of these journals existed long before the impact
factor was devised. The use of impact factor as a measure of quality is wide-
spread because it fits well with the opinion we have in each field of the best
journals in our specialty. (p. 1225)

Recent Advances and Trends in Scientometrics


With the advancement of research and technology, more content and information
resources are available online, and more connections exist among publications.
Scientometrics thus has unprecedented opportunities to be applied in new ways and to
address new research questions (Borgman & Furner, 2002). Traditionally, the main
source for citation analysis has been ISI's database, but new developments mean that ISI
is no longer the only game in town. Lately, new citation databases such as Scopus
(www.scopus.com), Google Scholar, and CiteSeer (citeseer.ist.psu.edu) have emerged.
This opens up new opportunities for research and related measurement and can be
readily adopted for HRD research.

Developments in Research Performance Measures


Scientometrics has witnessed a number of breakthroughs in measuring research
performance during the past few years, as represented by an influential metric, the h-index
(Hirsch, 2005). A scholar or a journal has an index h if h of its overall published
articles, denoted Np, have at least h citations each and the remaining (Np − h) articles
have no more than h citations each (Hirsch, 2005). For instance, an h-index
of 6 means that 6 of the unit's (person's or journal's) publications have been
cited at least 6 times each. The advantage of the h-index is that it incorporates both
quantity, the number of publications, and quality, the citations received, into a single
indicator. Although developed by a physicist, the h-index has quickly been accepted
across disciplines applying scientometrics to research performance (Braun, Glänzel, &
Schubert, 2006). A disadvantage of the h-index has also been noted, however: it is not
sensitive to one or a few highly cited publications of the unit and is thus likely to
underestimate a journal's or a scholar's contribution (Egghe, 2006).
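The definition translates directly into a few lines of code; a minimal sketch with hypothetical citation counts:

```python
def h_index(citations):
    """h such that h publications have at least h citations each
    (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Six publications cited 10, 8, 5, 4, 3, and 0 times give h = 4.
print(h_index([10, 8, 5, 4, 3, 0]))  # -> 4
```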
Egghe (2006) derived a g-index that offers "an improvement of the h-index to measure
the global citation performance of a set of articles" (p. 131). Given a set of articles
ranked in decreasing order of the citations received, the g-index is the unique largest
number such that the top g articles together received at least g² citations (Egghe, 2006).
The g-index has gained popularity in the literature because of this advantage. Also, to
complement the h-index's lack of consideration of time, Sidiropoulos, Katsaros, and
Manolopoulos (2007) derived a contemporary h-index (hc-index). Retaining the
advantages of the original h-index, the hc-index adds an age-related weight to each cited
article, giving less weight to older articles.
Most recently, a new complement, the e-index, was offered by Zhang (2009) to address
the following weakness of the h-index: in its current form, the h-index cannot account for
excess citations beyond those needed to reach the h-score. As a result, different scholars
or journals may obtain an identical h-index with different citation counts, and a scholar
or journal with more citations distributed among multiple publications may obtain a
lower h-score than others. The e-index captures the citations in the h-core that exceed
the h² citations already counted by the h-index. The e-index received acceptance within
a matter of months (Dodson, 2009).
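Minimal sketches of both complements under the same hypothetical counts as above; e is reported in Zhang's square-root form:

```python
import math

def g_index(citations):
    # Largest g such that the top g articles together received at least
    # g^2 citations (Egghe, 2006); bounded here by the number of articles.
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

def e_index(citations):
    # e^2 = citations in the h-core beyond the h^2 already credited
    # by the h-index (Zhang, 2009).
    ranked = sorted(citations, reverse=True)
    h = sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)
    return math.sqrt(sum(ranked[:h]) - h * h)

counts = [10, 8, 5, 4, 3, 0]           # h = 4 for these counts
print(g_index(counts))                 # -> 5
print(round(e_index(counts), 2))       # -> 3.32 (core sum 27, minus 16)
```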
Furthermore, a measure known as the eigenfactor has been introduced to exclusively
and comprehensively assess journal performance (Bergstrom, 2007). Through a
mathematical algorithm, the eigenfactor uses an iterative ranking scheme to identify and rank
journals that are cited by other highly ranked ones. It ranks journals by measuring
"the total influence of a journal on the scholarly literature or, comparably, the total
value provided by all of the articles published in that journal in a year" (p. 315).
The eigenfactor had an immediate effect on scientometrics research and has been
incorporated into the JCR since 2007 (Fersht, 2009).
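A simplified sketch of the iterative ranking idea: power iteration on a column-normalized cross-citation matrix. The published eigenfactor algorithm adds refinements omitted here (a five-year citation window, zeroed journal self-citations, and an article-share teleportation vector), and the matrix below is hypothetical:

```python
import numpy as np

# Toy cross-citation matrix: C[i, j] = citations from journal j to
# journal i, with the diagonal zeroed (self-citations ignored).
C = np.array([[0., 3., 1.],
              [2., 0., 5.],
              [4., 1., 0.]])

# Column-normalize so each citing journal distributes one unit of influence.
P = C / C.sum(axis=0)

# Power iteration: a journal is influential if influential journals cite it.
rank = np.full(3, 1 / 3)
for _ in range(100):
    rank = P @ rank

print(rank.round(3))  # stationary influence scores; highest = most influential
```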

New Tools and Data Sources


Recent developments in information technology have provided unrestricted access to
new data sources for scientometric analysis in almost all aspects of all disciplines.
Whereas the SSCI used to be the only citation database tracking SSCI-indexed journals,
the new alternatives allow scholars to analyze journals not included in the SSCI. Such
analysis may help editors gauge the timing of an SSCI application or prepare supporting
documents. For example, the open-access software Publish or Perish (PoP),
designed by Harzing (2010), is available to research communities worldwide. This
software incorporates the most recent advances in citation analysis metrics discussed
above. It extracts citation data through Google Scholar's (GS) advanced search function,
produces all relevant indices, and allows users to perform research impact analyses of
both journals and individual scholars (Harzing & van der Wal, 2008). The software is
well received by scholarly communities worldwide (Harzing, 2010).
The advances in assessing and evaluating research and the improved convenience
of data sources and tools provide opportunities for HRD scholars in research and
assessment. We discuss the implications and future research in HRD in relation to
these potential applications.

Implications and Future Research in HRD


Thus far, the HRD literature has hardly incorporated scientometric approaches in
evaluating research communications, with only a few exceptions. The absence of
scientometric analysis in HRD may be caused by the following factors. First, as an
emerging field, HRD research has been in an evolving process of identifying its
theoretical and empirical bases (Swanson, 1999), and the citation raw data needed for
such analysis have been accumulating. Second, the AHRD community has had limited
exposure to scientometric research and methods in the past; recently, the AHRD
leadership has emphasized the importance of research quality and related analysis
(Russ-Eft, 2008, 2010). Third, there may be a lack of awareness of the importance of
self-regulation and assessment regarding the quality and rigor of HRD research. Some
may consider that peer-reviewed publication implies acceptance of the research quality,
and thus that there is no need for additional evaluative analysis.
Scientometrics can be naturally linked to improving HRD research. First, our review
showed an imperative need for applying scientometric analysis to HRD research.
The HRD literature has accumulated sufficient citations for an in-depth understanding of
its disciplinary interactions and the process of knowledge creation. It is also at an
appropriate stage for HRD journal impact analysis of the role of HRD journals in
knowledge dissemination and their related impact. This is particularly important as a
number of HRD journals pursue SSCI status, following the recent success of HRDQ
after its 20 years of pursuit. In fact, such efforts have emerged in the literature. Jo, Jeung,
Park, and Yoon (2009) analyzed the citation network among the four HRD publications
from 1990 to 2007. Recently, through a citation and content analysis, Jeung,
Yoon, Park, and Jo (2011) identified the 20 most cited HRD articles in journals outside
the AHRD community, revealing three key themes that have contributed to the
overall knowledge base: (a) training transfer and evaluation, (b) learning in organizations,
and (c) knowledge sharing and knowledge creation.
Second, as shown in other disciplines, applying scientometrics to HRD research
can help us understand HRD research development and maturity. Through analyzing
citation networks, keyword co-occurrences, and related patterns around research
topics, we may identify the existing HRD knowledge structure and its evolving fronts. A
recent study offers a relevant example from management learning and education (MLE)
scholarship, a research area shared with HRD (Wang, Rothwell, & Sun, 2009). In
exploring the legitimacy of MLE scholarship, Rynes and Brown (2011) examined four
MLE journals' contribution to the current state of MLE research with multiple
indicators of scholarly legitimacy. In particular, the study used citation data from
Google Scholar to measure consequential legitimacy and used editorial board
members' citation rates and h-indices for leadership legitimacy. The study derived
important future directions for MLE scholars and editors to advance the legitimacy
of MLE scholarship (Rynes & Brown, 2011).
Future research may further explore the interactions of HRD research topics and
knowledge structure. For example, the literature has debated what constitutes a foundation
of HRD (Swanson, 1999), yet the relative acceptance of the three-legged stool versus
the centipede model has not been clear. Through a citation and/or co-word analysis,
one may identify the relevance and influence of these models, and the outcome of such
studies is likely to yield new theoretical perspectives. Similar research may also be
conducted on HRD definitional research and other critical areas (Wang & Sun, 2009).
Last but not least, understanding scientometric analysis is essential for HRD
scholars conducting relevant, rigorous, and productive research. As citations give
credit to the cited items, scholars are required to cite carefully based on the content,
context, and relevance of the cited studies and to avoid citing secondary references
that credit the wrong sources. Yet it is not uncommon for some HRD articles to cite
literature out of context and relevance to justify an argument. Self-regulated citing
practice can not only contribute to the quality of research but also add to the citation
raw data for meaningful scientometric analysis at a later time.
Performing regular self-assessment through citation analysis has proved to be an
effective strategy for individual scholars' career development (Dodson, 2009). For
HRD scholars, self-citation analysis and the related h, e, and g scores can indicate one's
research impact beyond the number of publications. It can offer quantifiable support
for career development, such as tenure, promotion, or annual performance reviews. It
may also provide insight and guidance for effectively allocating personal resources.
For example, if one finds that a published work has not been cited for an extended
period of time, it may be time to develop a new research topic, unless one is confident
that the work will become a late citation buster, known as a "sleeping beauty" (Garfield,
1996). For doctoral students and junior scholars, general citation analysis may also offer
insight during the process of developing one's research agenda and priorities.

Declaration of Conflicting Interests


The author(s) declared no potential conflicts of interest with respect to the research, authorship,
and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of
this article.

References
Baldi, S. (1998). Normative versus social constructivist processes in the allocation of citations:
A network-analytic model. American Sociological Review, 63, 829-846.

Bayer, A. E., & Folger, J. (1966). Some correlates of a citation measure of productivity in sci-
ence. Sociology of Education, 39, 381-390.
Bedeian, A. G., Van Fleet, D. D., & Hyman, H. H. III. (2009). Scientific achievement and edito-
rial board membership. Organizational Research Methods, 12, 211-238.
Bergstrom, C. (2007). Eigenfactor: Measuring the value and prestige of scholarly journals. Col-
lege & Research Libraries News, 68, 314-316.
Bornmann, L., & Daniel, H.-D. (2008). What do citation counts measure? A review of studies
on citing behavior. Journal of Documentation, 64, 45-80. doi:10.1108/00220410810844150
Borgman, C. L., & Furner, J. (2002). Scholarly communication and bibliometrics. Annual
Review of Information Science and Technology, 36, 3-72.
Boyack, K. W., & Klavans, R. (2005). Predicting the importance of current papers. In
P. Ingwersen, & B. Larsen, (Eds.), Proceedings of the 10th international conference of
the international society for scientometrics and informetrics (pp. 196-203). Stockholm,
Sweden: Karolinska University Press.
Bradford, S. C. (1950). Documentation. Washington, DC: Public Affairs Press.
Braun, T., Glänzel, W., & Schubert, A. (1985). Scientometric indicators: A 32 country com-
parative evaluation of publishing performance and citation impact. Philadelphia, PA: World
Scientific.
Braun, T., Glänzel, W., & Grupp, H. (1995a) The scientometric weight of 50 nations in 27 sci-
ence areas, 1989-1993. 1. All fields combined: mathematics, engineering, chemistry and
physics. Scientometrics, 33, 263-293.
Braun, T., Glänzel, W., & Grupp, H. (1995b) The scientometric weight of 50 nations in 27 sci-
ence areas, 1989-1993. 2. Life sciences. Scientometrics, 34, 207-237.
Braun, T., Glänzel, W., & Schubert, A. (2006). A Hirsch-type index for journals. Scientometrics,
69, 169–173. doi:10.1007/s11192-006-0147-4
Brooks, T. A. (1986). Evidence of complex citer motivations. Journal of the American Society
for Information Science, 37, 34-36.
Brysbaert, M., & Smyth, S. (2011). Self-enhancement in scientific research: The self-citation
bias. Psychologica Belgica, 51(2), 129-137.
Bulick, S. (1978). Book use as a Bradford-Zipf phenomenon. College and Research Libraries,
39, 215-218.
Burrell, Q. L. (2003). Predicting future citation behavior. Journal of the American Society for
Information Science and Technology, 54, 372-378. doi:10.1002/asi.10207
Cano, V., & Lind, N. C. (1991). Citation life-cycles of 10 citation-classics. Scientometrics, 22,
297-312.
Cawkell, A. E. (1976). Documentation note—citations, obsolescence, enduring articles, and
multiple authorships. Journal of Documentation, 32, 53-58.
Cole, J. R., & Cole, S. (1973). Social stratification in science, Chicago, IL: The University of
Chicago Press
Cole, S. (1992). Making science: Between nature and society, Cambridge, MA: Harvard Uni-
versity Press.
Cronin, B. (1982). Norms and functions in citation—the view of journal editors and referees in
psychology. Social Science Information Studies, 2, 65-78.

Cronin, B. (2005a). The hand of science: Academic writing and its rewards. Lanham, MD:
Scarecrow Press.
Cronin, B. (2005b). A hundred million acts of whimsy? Current Science, 89, 1505-1509.
Ding, Y., Chowdhury, G. G., & Foo, S. (2001). Bibliometric cartography of information retrieval
research by using co-word analysis. Information Processing & Management, 37, 817-842.
doi:10.1016/S0306-4573(00)00051-0
Dodson, D. V. (2009). Citation analysis: Maintenance of h-index and use of e-index. Biochemical
and Biophysical Research Communications, 387, 625-626. doi:10.1016/j.bbrc.2009.07.091
Eggert, L. D. (2011). Best practices for allocating appropriate credit and responsibility to
authors of multi-authored articles. Front Psychology, 2, 196. doi:10.3389/fpsyg.2011.00196
Egghe, L. (2006). Theory and practice of the g-index. Scientometrics, 69(1), 131-152.
doi:10.1007/s11192-006-0144-7
Fersht, A. (2009). The most influential journals: Impact factor and eigenfactor. Proceedings
of National Academy of Sciences of USA, 106, 6883-6884. doi:10.1073/pnas.0903307106s
Frazzetto, G. (2004). Who did what? Uneasiness with the current authorship is prompting
the scientific community to seek alternatives. European Molicular Biology Organization
Reports, 5, 446-448.
Garfield, E. (1955). Citation indexes for science: A new dimension in documentation through
association of ideas. Science, 122(3159), 108-111.
Garfield, E. (1962). Can citation indexing be automated? Essays of an Information Scientist
(Vol. 1, pp. 84-90). Philadelphia, PA: ISI Press
Garfield, E. (1972). Citation analysis as a tool in journal evaluation—journals can be ranked by
frequency and impact of citations for science policy studies. Science, 178, 471-479.
Garfield, E. (1980). Essays of an information scientist (Vol. 3). Philadelphia, PA: ISI Press.
Garfield, E. (1986). Do Nobel Prize winners write citation classics? Current Contents, 23, 3-8.
Garfield, E. (1990). How ISI selects journals for coverage: Quantitative and qualitative consid-
erations. Current Contents, 22(28), 5-13.
Garfield, E. (1996). How can impact factors be improved? British Medical Journal, 313, 411-413.
Garfield, E. (1997). Dispelling a few common myths about journal citation impacts. Scientist,
11(3), 11-15.
Garfield, E. (1999). Journal impact factor: A brief review. Canadian Medical Association Jour-
nal, 161, 979-980.
Garfield, E. (2006). The history and meaning of Journal Impact Factor. JAMA, 295(1), 90-93.
Geisler, E. (2005). The measurement of scientific activity: research directions in linking phi-
losophy of science and metrics of science and technology outputs. Scientometrics, 62,
269-284. doi:10.1007/s11192-005-0020-x
Gilbert, G. N. (1977). Referencing as persuasion. Social Studies of Science, 7, 113-122.
Gross, P. L. K., & Gross E. M. (1927). College libraries and chemical education. Science, 66,
385-389.
Hagen, N. T. (2008). Harmonic allocation of authorship credit: Source-level correction of bib-
liometric bias assures accurate publication and citation analysis. PLoS ONE, 3(12), e4021.
doi:10.1371/journal.pone.0004021

Hargens, L. L. (2000). Using the literature: reference networks, reference contexts, and the
social structure of scholarship. American Sociological Review, 65, 846-865.
Harzing, A.W. (2010). Publish or perish, version 3.0. Retrieved from www.harzing.com/pop
.htm
Harzing, A., & van der Wal, R. (2008). Google scholar as a new source for citation analysis?
Ethics in Science and Environmental Politics, 8(1), 62-72. doi:10.3354/esep00076
Hirsch, J. E. (2005). An index to quantify an individual’s scientific output, Proceedings of
the National Academy of Sciences of the United States of America, 102, 16569-16572.
doi:10.1073/pnas.0507655102
Hoeffel, C. (1998). Journal impact factors. Allergy, 53, 1225. doi:10.1111/j.1398-9995.1998.
tb03848.x
Hurt, C. D. (1987). Conceptual citation differences in science, technology, and social sciences
literature. Information Processing & Management, 23, 1-6.
Hyland, K. (2003). Self-citation and self-reference: Credibility and promotion in academic pub-
lication. Journal of the American Society for Information Science and Technology, 54(3),
251-259.
Jeung, C. W., Yoon, H. J., Park, S., & Jo, S. J. (2011). The contributions of human resource
development research across disciplines: A citation and content analysis. Human Resource
Development Quarterly, 22(1), 87-109. doi:10.1002/hrdq.20062
Jo, S. J., Jeung, C., Park, S., & Yoon, H. (2009). Who is citing whom: citation network analysis
among HRD publications from 1990 to 2007. Human Resource Development Quarterly,
20(4), 503-537. doi:10.1002/hrdq.20023
King, J. (1987). A review of bibliometric and other science indicators and their role in research
evaluation. Journal of Information Science, 13, 261-276.
Klamer, A., & van Dalen, H. P. (2002) Attention and the art of scientific publishing. Journal of
Economic Methodology, 9, 289-315. doi:10.1080/1350178022000015104
Klein, D. B., & Chiang, E. (2004). Citation counts and SSCI in personnel decisions: A survey of
economics departments. Economics Journal Watch, 1, 166-174.
Knorr-Cetina, K. (1981). The manufacture of knowledge: An essay on the constructivist and
contextual nature of science. Oxford, UK: Pergamon.
Lederberg, J. (2000). How the science citation index got started. In Cronin, B., Atkins, H. B.
(Eds.), The web of knowledge (pp. 25-64). Medford, New Jersey: American Society for
Information Science.
Leydesdorff, L. (2001). The challenge of scientometrics: The development, measurement, and
self-organization of scientific communications (2nd ed). Leiden, Netherlands: Universal.
Lotka, A. J. (1926). The frequency distribution of scientific productivity. Journal of the Wash-
ington Academy of Sciences, 16, 217-223.
Maddox, J. (1989). Is the salami sliced too thinly? Nature, 342, 733.
MacRoberts, M. H., & MacRoberts, B. R. (1987). Another test of the normative theory of citing.
Journal of the American Society for Information Science, 38, 305-306.
MacRoberts, M. H. & MacRoberts, B. R. (1996). Problems of citation analysis. Scientometrics,
36, 435-444. doi:10.1007/BF02129604
MacRoberts, M. H., & MacRoberts, B. R. (2010). Problems of citation analysis: A study of
uncited and seldom-cited influences. Journal of the American Society for Information Sci-
ence and Technology, 61(1), 1-13.
McMillan, G. S., & Hamilton, Robert D. (2000). Using bibliometrics to measure firm knowl-
edge: An analysis of the US pharmaceutical industry. Technology Analysis & Strategic Man-
agement, 12, 465-475. doi:10.1080/09537320020004168
Merton, R. K. (1988) The Matthew effect in science, II: Cumulative advantage and the symbol-
ism of intellectual property. ISIS, 79, 606-623.
Moed, H. F., & Garfield, E. (2004). In basic science the percentage of “authoritative” references
decreases as bibliographies become shorter. Scientometrics, 60, 295-303. doi:10.1023/
B:SCIE.0000034375.39385.84
Moed, H. F., Burger, W. J. M., Frankfort, J. G., & van Raan, A. F. J. (1985). The use of biblio-
metric data for the measurement of university research performance. Research Policy, 14,
131-149.
Moravcsik, M. J. (1984). Life in a multidimensional world. Scientometrics, 6(2), 75-86.
Moravcsik, M. J., & Murugesan, P. (1975). Some results on the function and quality of citations.
Social Studies of Science, 5(1), 86-92. doi:10.1177/030631277500500106
Narin, F. (1976). Evaluative bibliometrics: The use of publication and citation analysis in the
evaluation of scientific activity, Cherry Hill, NJ: Computer Horizons.
Oppenheim, C. (1997). The correlation between citation counts and the 1992 research assess-
ment exercise ratings for British research in genetics, anatomy and archaeology. Journal of
Documentation, 53, 477-487.
Peters, H. P. F., & van Raan, A. F. J. (1994). On determinants of citation scores—a case study in
chemical engineering. Journal of the American Society for Information Science, 45, 39-49.
Prathap, G. (2011). The fractional and harmonic p-indices for multiple authorship. Scientomet-
rics, 86, 239-244. doi:10.1007/s11192-010-0257-x
Pratt, A. D. (1977). A measure of class concentration in bibliometrics. Journal of the American
Society for Information Science, 28, 285-292.
Price, D. J. (1965). Networks of scientific papers. Science, 149, 510-515.
Price, D. J. (1967). Nations can publish or perish. Science and Technology, 70, 84-90.
Radicchi, F., & Castellano, C. (2012). A reverse engineering approach to the suppression of cita-
tion biases reveals universal properties of citation distributions. PLoS ONE, 7, e33833.
doi:10.1371/journal.pone.0033833
Rennie, D., Yank, V., & Emanuel, L. (1997). When authorship fails. A proposal to make con-
tributors accountable. Journal of the American Medical Association, 278, 579-585.
Russ-Eft, D. (2008). SSCI, ISI, JCR, JIF, IF, and journal quality. Human Resource Development
Quarterly, 19, 185-189. doi:10.1002/hrdq.1235
Russ-Eft, D. (2010). Presidential address. The 2010 International Research Conference in
Americas, AHRD, St. Paul, Minnesota.
Rynes, S. L., & Brown, K. G. (2011). Where are we in the "long march to legitimacy?" Assess-
ing scholarship in management learning and education. Academy of Management Learning
& Education, 10, 561-582. doi:10.5465/amle.2009.0084
Schoonbaert, D., & Roelants, G. (1996). Citation analysis for measuring the value of scientific
publications: Quality assessment tool or comedy of errors? Tropical Medicine & Interna-
tional Health, 1, 739-752. doi:10.1111/j.1365-3156.1996.tb00106.x
Sidiropoulos, A. Katsaros, D., & Manolopoulos, Y. (2007). Generalized h-index for disclos-
ing latent facts in citation networks. Scientometrics, 72, 253-280. doi:10.1007/s11192-007-
1722-z
Small, H. (1982). Citation context analysis. In B. Dervin & M. Voigt (Eds.), Progress in com-
munication science (Vol. 3, pp. 287-310). Norwood, NJ: Ablex.
Smart, S., & Waldfogel, J. (1996). A citation-based test for discrimination at economics and
finance journals. (NBER Working Paper, No. 5460). Cambridge, MA: National Bureau of
Economic Research.
Smith, L. C. (1981). Citation analysis. Library Trends, 30, 83-106.
Smith, R. (2006). Peer review: A flawed process at the heart of science and journals. Journal of
the Royal Society of Medicine, 99, 178-182.
Stewart, J. A. (1983). Achievement and ascriptive processes in the recognition of scientific
articles. Social Forces, 62, 166-189.
Swanson, R. A. (1999). HRD theory, real or imagined? Human Resource Development Interna-
tional, 2(1), 2-5.
Taborsky, M. (2007). Impact factor statistics and publication practice: What can we learn?
Ethology, 113, 1-8. doi:10.1111/j.1439-0310.2007.01329.x
Taubes, G. (1993). Measure for measure in science: How citation analysis and Science Watch,
its primary showcase, are turning science into a numbers game and stirring mixed feeling
among researchers? Science, 260, 884-886.
Testa, J. (2008). The Thomson Scientific journal selection process. Contributions to Science,
4(1), 69-73.
Tijssen, R. J. W., van Leeuwen, T. N., & van Raan, A. F. (2002). Mapping the scientific per-
formance of German medical research. An international comparative bibliometric study,
Stuttgart, Germany: Schattauer.
Tsai, W., & Wu, C. (2010). Knowledge combination: A cocitation analysis. Academy of Man-
agement Journal, 53, 441-450.
Van den Besselaar, P., & Leydesdorff, L. (1996). Mapping Change in scientific specialties:
A scientometric reconstruction of the development of artificial intelligence. Journal of
the American Society for Information Science, 47, 415-436. doi:10.1002/(SICI)1097-
4571(199606)
van Raan, A. F. J. (2005). For your citations only? Hot topics in bibliometric analysis. Measure-
ment: Interdisciplinary Research and Perspectives, 3, 50-62.
Wang, G. G., Rothwell, W. J., & Sun, J. Y. (2009). Management development in China: A policy
analysis. International Journal of Training and Development, 13, 205-220.
Wang, G. G., & Spitzer, D. R. (2005). HRD measurement & evaluation: Looking back and
moving forward. Advances in Developing Human Resources, 7(1), 6-15. doi:10.1177/
1523422304272077
Wang, G. G., & Sun, J. Y. (2009). Clarifying the boundaries of human resource development.
Human Resource Development International, 12(1), 93-103.

White, H. D., & McCain, K. W. (1998). Visualizing a discipline: An author co-citation analysis
of information science, 1972-1995. Journal of the American Society for Information Sci-
ence, 49, 327-356.
Yue, W., & Wilson, C. S. (2004). Measuring the citation impact of research journals in clini-
cal neurology: A structural equation modelling analysis. Scientometrics, 60, 317-332.
doi:10.1023/B:SCIE.0000034377.93437.18
Zhang, C. T. (2009). The e-index, complementing the h-index for excess citations, PLoS ONE,
5, e5429. doi:10.1371/journal.pone.0005429
Zipf, G. K. (1972). Human behavior and the principle of least effort. New York, NY: Hafner.

Bios
Greg G. Wang, PhD, is a professor in human resource development in the College of Business
and Technology at The University of Texas at Tyler. He is also the editor of Journal of Chinese
Human Resource Management.

Jerry W. Gilley, EdD, is a professor and the chair of the Department of Human Resource
Development and Technology in the College of Business and Technology at The University of
Texas at Tyler. He has served as the president of the Academy of Human Resource Development.

Judy Y. Sun, PhD, is an assistant professor in human resource development in the College of
Business and Technology at The University of Texas at Tyler. Her research interest is in career
development, HRD theory building research, and economic analysis of HRD.
