Scientometrics
Abstract
We explore opportunities for assessing and advancing Human Resource Development
(HRD) research through an integrative literature review of scientometric theories and
methods. Known as the “science of science,” scientometrics is concerned with the
quantitative study of scholarly communications, disciplinary structure and assessment
and measurement of research impact. The integrative review of the scientometric
literature identified areas important for evaluating HRD research and publications, including
citation analysis, citing-behavior analysis, and the Social Science Citation Index (SSCI)
journal quality control process. We discuss three major implications for engaging HRD
scholars in evaluating and assessing HRD research and scholarly communications for
the quality control and self-regulation of HRD research.
Keywords
scientometrics, citation analysis, research quality assessment, HRD research impact,
SSCI
With several decades’ continued efforts in theory building and research, Human
Resource Development (HRD) has evolved into an integrated disciplinary system. This
system currently includes (a) a leading research association, the Academy of Human
Resource Development (AHRD), (b) a suite of four journals, Advances in Developing
Human Resources (ADHR), Human Resource Development International (HRDI),
1 The University of Texas at Tyler, Tyler, TX, USA
Corresponding Author:
Greg G. Wang, Department of Human Resource Development and Technology, College of Business and
Technology, The University of Texas at Tyler, 3900 University Blvd., Tyler, TX 75799, USA
Email: wanggreg@aol.com
Wang et al. 501
mechanism for the quality and effect of journal performance and highlight the latest
advances in scientometric methods and indices in relation to journal performance and
scholarly research impact. Lastly, we discuss the implications and future research
opportunities for HRD research.
Hamilton, 2000). To this end, understanding and applying the three areas of sci-
entometrics for HRD research may facilitate further development and maturity of
the field.
Citation Analysis
Knowledge creation involves more than research. Without communication, previous
findings would be lost, and scholars would have to reinvent the wheel. In practice,
research must be communicated through peer-reviewed publications (Schoonhaert &
Roelants, 1996). Research publications disseminate new findings and invite peers to
use or criticize. When scholars accept or reject a published result, they indicate it in
their writings in the form of a citation.
By definition, a citation is a reference to a published work by a more recent
work. The latter is the “citing” item, and the one receiving the citation is the “cited”
item. In scientometric analysis, citation counts are used as raw data and considered
“unobtrusive measures that do not require the cooperation of a respondent and do not
themselves contaminate the response (i.e., they are non-reactive)” (Smith, 1981,
p. 84). As such, citation analysis is a method of exploring the structure and the inter-
relationship of a discipline. It can be used to examine the following three phenomena:
(a) direct citation, (b) citation coupling, and (c) cocitation. Because scientometrics
originated from analyzing citation counts (Gross & Gross, 1927), research on direct
citation has been a critical component for understanding who cites what and why.
Citation coupling analysis studies the sharing of one or more references by two or
more later studies (Small, 1982; Van den Besselaar & Leydesdorff, 1996). Cocitation
analysis examines the pattern in which two or more earlier items are jointly cited
by the later literature (Small, 1982). Tsai and Wu (2010) provided a recent
example of cocitation and coupling analyses of the knowledge structure in management
research. The analysis generally involves counting the number of citations that occur or
co-occur, providing insights on author cocitation, journal cocitation, or keyword
cocitation. Changes in coupling and cocitation patterns over time in a discipline can
offer insight into the process of research evolution (Tsai & Wu, 2010).
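To make the three citation relationships concrete, they can be sketched in a few lines of Python. The citing papers (P1-P3) and cited items (A-D) below are hypothetical, for illustration only:

```python
from collections import Counter
from itertools import combinations

# Toy citation data: each citing paper maps to the set of items it cites.
citations = {
    "P1": {"A", "B", "C"},
    "P2": {"B", "C"},
    "P3": {"A", "C", "D"},
}

# Direct citation: how often each earlier item is cited.
direct = Counter(c for refs in citations.values() for c in refs)

# Citation (bibliographic) coupling: two citing papers are coupled by
# the number of references they share.
coupling = {
    (p, q): len(citations[p] & citations[q])
    for p, q in combinations(sorted(citations), 2)
}

# Cocitation: two earlier items are cocited by the number of later
# papers that cite both of them.
cocitation = Counter(
    pair
    for refs in citations.values()
    for pair in combinations(sorted(refs), 2)
)
```

Here item C is cited three times, papers P1 and P2 are coupled by two shared references, and items B and C are cocited by two later papers.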
Citation analyses have been explored at different levels for various purposes,
including assessing and evaluating national science policies and disciplinary develop-
ment (Oppenheim, 1997; Tijssen, van Leeuwen, & van Raan, 2002), departments and
research laboratories (Bayer & Folger, 1966; Narin, 1976), books and journals
(Garfield, 1972), and individual scholars (Cole & Cole, 1973). In these studies, citation
counts in peer-reviewed publications were used to measure the effect of the cited
items on related research topics. The rationale is that frequently cited publications are
likely to have generated more responses from scholars (van Raan, 2005). In the literature, citation
analysis is often associated with two competing research paradigms: The normative
theory and the social constructivist theory of citing behavior. Both are rooted in
broader social theories of science.
504 Human Resource Development Review 11(4)
The reference serves both instrumental and symbolic functions in the trans-
mission and enlargement of knowledge. Instrumentally, it tells us of work we
may not have known before, some of which may hold further interest for us;
symbolically, it registers in the enduring archives the intellectual property of
the acknowledged source by providing a pellet of peer recognition of the
knowledge claim, accepted or expressly rejected, that was made in that
source. (p. 622)
The weight of empirical evidence seems to suggest that scientists typically cite
the works of their peers in a normatively guided manner, and that these signs
(citations) perform a mutually intelligible communicative function. (p. 1508)
In addition, citation counts are often considered to have predictive power. Garfield
(1986) found that the expected impact of Nobel Prize winners on the scientific
community is reflected significantly in their citation records long before they receive the
prizes. In recent years, citation rates have become increasingly important in judging
the research quality of journals, institutions and departments, as well as individual
faculty members (Klein & Chiang, 2004).
Another related area is co-word analysis. This type of analysis assumes that an
article’s keywords symbolize the core content of its research topic and represent its
link with the research problems. One or more co-occurring keywords in articles
constitute a link among studies (Tsai & Wu, 2010). The presence of many co-occurrences
around the same keyword(s) points to a locus of strategic alliance among studies that
may correspond to a research topic. Co-word analysis thus reveals patterns and trends
in a discipline through the association strengths among keywords. In other words, co-word
analysis visualizes the intellectual structure of a discipline as maps of its conceptual
space; a time series of such maps then traces changes in the intellectual
domain (Ding, Chowdhury, & Foo, 2001).
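The core of a co-word analysis, counting how often keyword pairs co-occur across articles, can be sketched as follows; the article keyword lists are hypothetical:

```python
from collections import Counter
from itertools import combinations

# Hypothetical keyword lists, one per article.
articles = [
    ["HRD", "training", "evaluation"],
    ["HRD", "training", "transfer"],
    ["HRD", "evaluation"],
]

# Count each keyword pair's co-occurrences across all articles.
co_word = Counter(
    pair
    for kws in articles
    for pair in combinations(sorted(set(kws)), 2)
)
```

The resulting pair counts form the raw data for the keyword maps described above; pairs with high counts mark candidate research topics.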
(c) citations are made to the best possible works. However, the central problem of
citation counts for measuring research impact is that standards and conventions of
citation are not precisely formalized (Cronin, 1982). A citation itself cannot disclose
why exactly an author cites a certain item; neither can it reflect what specific aspect
of the cited item the author actually refers to (Brooks, 1986). Therefore, exploring
citing behaviors and the factors determining when and why scholars cite others has
become an area of study in its own right. Scientometricians have examined citing
behaviors in various disciplines. We summarize three representative taxonomies of
citing behaviors in Table 1.
The literature has also explored factors that influence citing behaviors in various disciplines
along multiple dimensions, including (a) time, (b) discipline, (c) journal, (d) article,
and (e) author/reader dimensions (Bornmann & Daniel, 2008). With the passage of
time and increases in research output, citations become more frequent, and more
citations are expected to recent publications than to older ones because more research
advances are available (Cawkell, 1976). Also, the more frequently a publication has been
cited in the past, the more frequently it will be cited in the future. Research has shown that the
expected number of future citations is a linear function of the current citations (Burrell,
2003; Cano & Lind, 1991). Furthermore, studies have revealed that citing behaviors and
practices vary among disciplines (Braun, Glänzel, & Grupp, 1995a, 1995b; Hargens,
2000; Hurt, 1987) and even within different areas of a subdiscipline (Klamer & van
Dalen, 2002). In some disciplines, scholars cite recent literature more often than in others
(Peters & van Raan, 1994). As the possibility of being cited is associated with the number
of journals in a discipline (Moed, Burger, & Frankfort, 1985), smaller disciplines would
generate fewer citations than more general fields (King, 1987). This is likely to be the case
for HRD compared to psychology or management.
The citation of an article is found to be dependent on the frequency of publication
of journals containing related articles (Stewart, 1983). The position of an article within
a journal issue also affects its citation rate (Smart & Waldfogel, 1996). A journal’s
accessibility, visibility, and internationality as well as the quality or prestige influence
the probability of citations (Boyack & Klavans, 2005; Moed et al., 1985; Yue &
Wilson, 2004). Methodological, conceptual, and literature review pieces tend to attract
more citations than empirical studies (Cano & Lind, 1991; MacRoberts & MacRoberts,
1996). We expect this to be the case for HRDR, given its editorial policy on
conceptual and review articles.
Positive correlations have been identified between citation frequency and the num-
ber of coauthors, the length of an article, and the references listed in an article (Baldi,
1998; Boyack & Klavans, 2005). The article dimension is linked to the author/reader
dimension. As citations are often affected by social networks, authors tend to cite works
by those with whom they are personally acquainted (White & McCain, 1998). This
results in greater reciprocal exchange of citations over time (Cronin, 2005a). In the
HRD literature, although these phenomena can be observed, empirical investigation
may help us understand the sociology of HRD research networks and their relationship
with knowledge production.
Table 1.

Author  Taxonomy
Garfield (1962) • Paying homage to pioneers;
• Giving credit or homage to related work;
• Identifying methodology;
• Providing background reading;
• Correcting earlier work;
• Criticizing previous work;
• Substantiating claims;
• Alerting to forthcoming work;
• Providing leads to poorly disseminated, indexed, or
uncited work;
• Authenticating data and classes of fact;
• Identifying origin of an idea or concept;
• Disclaiming work or ideas of others (negative
claims);
• Disputing priority claims of others (negative
homage) (p. 85).
Moravcsik and Murugesan (1975) • Conceptual versus operational citations;
• Evolutionary versus juxtapositional citations;
• Organic versus perfunctory citations;
• Confirmative versus negational citations;
• Valuable versus redundant citations.
Bornmann and Daniel (2008) • Affirmational: for confirming, supporting, or
agreeing with cited items;
• Assumptive: for assumed knowledge or
background in cited items;
• Conceptual: for conceptual or theoretical support;
• Contrastive: for contrasts or alternatives;
• Methodological: for methodological support;
• Negational: for questioning, correcting, or negatively
evaluating cited items;
• Perfunctory: citing without additional comment, as a
redundant reference, or without apparent relevance
to the author’s immediate concerns;
• Persuasive: citing in a “ceremonial fashion” or citing a
recognized authority in the field.
Note: Summarized from Bornmann and Daniel (2008), Garfield (1962), and Moravcsik and Murugesan
(1975).
reality because of the complexity and unregulated nature of citing behaviors, which
can make citation analysis problematic or even distorted (MacRoberts & MacRoberts,
1996, 2010; Radicchi & Castellano, 2012). Of the problems identified, we highlight
those likely to be encountered in HRD research and publications.
First, because of oversight or lack of awareness, scholars may not cite the most
influential and relevant items in their work (Garfield, 1980; MacRoberts & MacRoberts,
1996). Second, biased citing is frequently found in published items. MacRoberts and
MacRoberts (1987) uncovered that only 37% of 13 facts were cited correctly in 93
citations in a study of the history of genetics, and the citing was highly biased. Notably,
an important cause of citing bias revealed in the same study is secondary source cita-
tions. Of the citations analyzed,
38% were to secondary sources; that is, over one-third of the “credit” given was
taken from the discoverer and allotted to someone who had nothing to do with
the discovery. (p. 344)
A third problem is perceived excessive self-citing. It has been estimated that this type
of citation varies from 10% to 30% of overall citations, depending on the discipline
(MacRoberts & MacRoberts, 1996). Through text analysis and expert interviews, Hyland
(2003) found that self-citing “reflect(s) both the promotional strategies of individuals
and the epistemological practices of their disciplines” (p. 251). More recently, in
scrutinizing four psychological journals published in 2006 and 2007, Brysbaert and
Smyth (2011) reported 0% to 45% self-citation and concluded that it is “a self-serving
bias motivated by self-enhancement and self-promotion” (p. 129).
All three problems may exist in HRD research. The first may sometimes be
addressed through the peer-review process. The second, especially crediting a wrong
source through secondary referencing, can be observed. Self-citing in HRD is also
not uncommon. Studies exploring these issues in HRD publications may enhance our
awareness of their status and severity and promote conscientious research rigor.
The issue of multiple authorship. How to allocate appropriate credit among multiple
coauthors affects the accuracy and credibility of citation analysis on scholarly impact.
Traditional practices have been (a) exclusive counting—giving the full credit to the
first author (MacRoberts & MacRoberts, 1996), (b) inflationary counting—distributing
the full credit repeatedly to all coauthors, or (c) equalizing counting—dividing the
credit among all coauthors in equal fractions (Eggert, 2011; Hagen, 2008). Given its
importance to research ethics and scholarly performance, the literature has explored
solutions in two areas. One is to advocate changing “authorship” to “contributorship”
for “disclosing to the reader of every participant’s contribution to the work and to
the manuscript” (Rennie, Yank, & Emanuel, 1997, p. 582). Guidelines have been
proposed to make the contributions of coauthors more specific and transparent, in
addition to coauthor ranking (Eggert, 2011; Frazzetto, 2004). However, these guidelines
are not widely known, and such practices have been rare. The second area is to develop
analytical tools for allocating credit for coauthorship. Recently, a p-index has been proposed
to give credit according to authorship rank and number of coauthors (Prathap, 2011).
It is designed to “combine fractional credit simultaneously on a paper and citation basis for each
paper to compute the fractional value” for each coauthor (p. 240). The p-index measure
is still based on the assumption of “contributorship” and has been neither empirically
tested nor widely adopted.
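The three traditional counting schemes can be expressed directly in code. The following is a minimal sketch, with hypothetical author names in the usage note; it implements only the schemes named above, not the p-index:

```python
def exclusive(authors, credit=1.0):
    """Exclusive counting: full credit to the first author only."""
    return {a: (credit if i == 0 else 0.0) for i, a in enumerate(authors)}

def inflationary(authors, credit=1.0):
    """Inflationary counting: full credit repeated for every coauthor,
    so a three-author paper dispenses three units of credit in total."""
    return {a: credit for a in authors}

def equalized(authors, credit=1.0):
    """Equalized counting: credit divided equally among all coauthors."""
    share = credit / len(authors)
    return {a: share for a in authors}
```

For a paper by ["Alice", "Bob", "Carol"], exclusive counting gives Alice 1.0 and the others nothing, inflationary counting gives each coauthor 1.0 (3.0 in total), and equalized counting gives each one third. The inflation under the second scheme is exactly why it distorts aggregate impact measures.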
publishable unit for the purpose of driving up citations (Maddox, 1989). (e) Self-citation:
it is accepted, up to a limit, that self-citation can be justified (e.g., to demonstrate a
track record of, or authority in, an area of research), but when it surpasses the acceptable
limit, it should be discounted. Yet the Institute for Scientific Information (ISI) has not
specified the “acceptable limit” and has sometimes used 20% as a loose measure (Taubes,
1993).
Nonetheless, the (S)SCI has gained popularity and an authoritative position in
indexing quality journals and has been generally regarded as the “gold standard
for databases offering indexing in the social sciences” (Bedeian, Van Fleet, &
Hyman, 2009, p. 216). It has been widely praised, as represented by the following
quote:
total number of citations received by the cited journals (Garfield, 2006). The half-life
index is an indicator of the rate at which a journal’s articles become obsolete. This, in
turn, may reflect the rate of obsolescence of knowledge in a journal’s subject area. For
example, if a journal has a half-life of 4, half of the citations the journal receives in a
given year are to articles published during the previous four years. The remaining
citations are dispersed among all the articles the journal has published since it was
founded. Research has found that these metrics are discipline specific (Garfield, 1999).
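The cited half-life described above can be computed from a journal's citation-age distribution. The sketch below assumes a hypothetical list in which position k holds the citations received this year to articles published k years ago:

```python
def cited_half_life(citations_by_age):
    """Return the smallest number of years, counting back from the
    current year, whose articles account for at least half of the
    citations the journal received this year.

    citations_by_age[k] = citations this year to articles published
    k years ago (k = 0 is the current year).
    """
    total = sum(citations_by_age)
    running = 0
    for age, count in enumerate(citations_by_age):
        running += count
        if running * 2 >= total:
            return age + 1
    return len(citations_by_age)
```

With the distribution [10, 30, 25, 20, 10, 5], the most recent three years account for 65 of 100 citations, so the half-life is 3; a journal whose citations concentrate in older articles would show a longer half-life, signaling slower obsolescence.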
Of the many controversies about the JIF, Hoeffel (1998) explained the reason for its
popularity:
Experience has shown that in each specialty the best journals are those in which
it is most difficult to have an article accepted, and these are the journals that
have a high impact factor. Most of these journals existed long before the impact
factor was devised. The use of impact factor as a measure of quality is wide-
spread because it fits well with the opinion we have in each field of the best
journals in our specialty. (p. 1225)
sensitive to one or more highly cited publications of the unit, and thus likely to
underestimate a journal’s or a scholar’s contribution (Egghe, 2006).
Egghe (2006) derived a g-index that offers “an improvement of the h-index to
measure the global citation performance of a set of articles” (p. 131). The g-index is
defined as follows: given a set of articles ranked in decreasing order of the citations that
a journal or a scholar receives, it is the unique largest number g such that the top g
articles together received at least g² citations (Egghe, 2006). The g-index has gained
popularity in the literature because of this obvious advantage. Also, to complement the
h-index’s lack of consideration of time-related factors, Sidiropoulos, Katsaros, and
Manolopoulos (2007) derived a contemporary h-index (hc-index). Retaining the
advantages of the original h-index, the hc-index adds an age-related weight to each
cited article, giving less weight to older articles.
Most recently, a new complement, the e-index, was offered by Zhang (2009) to address
the following weakness of the h-index: in its current form, the h-index cannot account
for excess citations beyond the h² needed to reach a given h-score. This means that
different scholars or journals may obtain an identical h-index with different citation
counts, or that a scholar or journal receiving more citations distributed among multiple
publications may obtain a lower h-score than others. The e-index assumes the unit
under study has at least h² citations and captures the excess citations in the h-core
beyond the h² already counted by the h-index. The e-index received acceptance within
a matter of months (Dodson, 2009).
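The h-, g-, and e-index definitions above translate into short functions. The following is a minimal sketch; the citation counts in the usage note are hypothetical, and the g-index here is capped at the number of articles:

```python
def h_index(citations):
    """Largest h such that h articles each received at least h citations."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

def g_index(citations):
    """Largest g such that the top g articles together received
    at least g^2 citations (capped at the number of articles)."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(ranked, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

def e_index(citations):
    """Square root of the excess citations in the h-core beyond h^2."""
    h = h_index(citations)
    core = sorted(citations, reverse=True)[:h]
    return (sum(core) - h * h) ** 0.5
```

For the citation list [10, 8, 5, 4, 3], the h-index is 4 (four articles with at least 4 citations each), while the g-index is 5 because all 30 citations together exceed 5²; the e-index then recovers the 11 h-core citations that the h-index ignores, illustrating how the complementary indices differentiate units with identical h-scores.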
Furthermore, a measure known as the eigenfactor has been introduced to exclusively
and comprehensively assess journal performance (Bergstrom, 2007). Through a
mathematical algorithm, the eigenfactor uses an iterative ranking scheme that identifies
and ranks journals cited by other highly ranked journals. It ranks journals by measuring
“the total influence of a journal on the scholarly literature or, comparably, the total
value provided by all of the articles published in that journal in a year” (p. 315). The
eigenfactor had an immediate effect on scientometric research and has been
incorporated into the JCR since 2007 (Fersht, 2009).
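The iterative idea behind the eigenfactor can be illustrated with a simplified PageRank-style sketch. This is not the actual Eigenfactor algorithm, which additionally excludes self-citations and weights by article counts; the citation matrix in the usage note is hypothetical:

```python
def iterative_rank(cites, damping=0.85, iters=200):
    """Rank journals iteratively: cites[i][j] = citations from journal j
    to journal i. Each citing journal distributes its current influence
    across the journals it cites, so citations arriving from highly
    ranked journals count for more."""
    n = len(cites)
    # Total outgoing citations per citing journal (column sums).
    out = [sum(cites[i][j] for i in range(n)) for j in range(n)]
    rank = [1.0 / n] * n
    for _ in range(iters):
        rank = [
            (1 - damping) / n
            + damping * sum(
                cites[i][j] * rank[j] / out[j]
                for j in range(n) if out[j]
            )
            for i in range(n)
        ]
    return rank
```

On a toy three-journal matrix such as [[0, 1, 1], [1, 0, 0], [0, 1, 0]], the ranks sum to one and the journal cited by both other journals emerges on top, even though raw citation counts alone would not distinguish it as strongly.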
individual scholar research impact analysis (Harzing & van der Wal, 2008). The software
is well received by scholarly communities worldwide (Harzing, 2010).
The advances in assessing and evaluating research, and the improved convenience
of data sources and tools, provide HRD scholars with opportunities for research and
assessment. We discuss the implications and future research in HRD in relation to
potential applications.
indicators of scholarly legitimacy. Particularly, the study used citation data from
Google Scholar to measure the consequential legitimacy, and used editorial board
members’ citation rates and h-indices for the leadership legitimacy. The study derived
important future directions for MLE scholars and editors to advance the legitimacy of
MLE scholarship (Rynes & Brown, 2011).
Future research may further explore the interactions of HRD research topics and
knowledge structure. For example, the literature has debated what constitutes a
foundation of HRD (Swanson, 1999). Yet the relative acceptance of the three-legged
stool versus the centipede model remains unclear. With a citation and/or co-word
analysis, one may identify the relevance and the influence of the models. The outcomes
of such studies are likely to yield new theoretical perspectives. Similar research may
also be conducted on HRD definitional research and other critical areas (Wang & Sun, 2009).
Last but not least, understanding scientometric analysis is essential for HRD
scholars conducting relevant, rigorous, and productive research. Because citations give
credit to the cited items, scholars are required to cite carefully based on the content,
context, and relevance of cited studies and to avoid citing secondary references that
credit wrong sources. Yet it is not uncommon for some HRD articles to cite literature
out of context and relevance to justify an argument. Self-regulated citing practice
can not only contribute to the quality of research but also add to the citation raw data
for meaningful scientometric analysis at a later time.
Performing regular self-assessment through citation analysis has proved to be an
effective strategy for individual scholars’ career development (Dodson, 2009). For
HRD scholars, self-citation analysis and the related h, e, and g scores can indicate one’s
research impact beyond the number of publications. They can offer quantifiable support
for career development, such as tenure, promotion, or annual performance reviews. They
may also provide insight and guidance for effectively allocating personal resources.
For example, if one finds that a published work has not been cited for an extended
period of time, it may be time to develop a new research topic, unless one is confident
that the work will become a late citation buster known as a “sleeping beauty” (Garfield,
1996). For doctoral students and junior scholars, general citation analysis may also offer
insight during the process of developing one’s research agenda and priorities.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of
this article.
References
Baldi, S. (1998). Normative versus social constructivist processes in the allocation of citations:
A network-analytic model. American Sociological Review, 63, 829-846.
Bayer, A. E., & Folger, J. (1966). Some correlates of a citation measure of productivity in sci-
ence. Sociology of Education, 39, 381-390.
Bedeian, A. G., Van Fleet, D. D., & Hyman, H. H. III. (2009). Scientific achievement and edito-
rial board membership. Organizational Research Methods, 12, 211-238.
Bergstrom, C. (2007). Eigenfactor: Measuring the value and prestige of scholarly journals. Col-
lege & Research Libraries News, 68, 314-316.
Borgman, C. L., & Furner, J. (2002). Scholarly communication and bibliometrics. Annual
Review of Information Science and Technology, 36, 3-72.
Bornmann, L., & Daniel, H.-D. (2008). What do citation counts measure? A review of studies
on citing behavior. Journal of Documentation, 64, 45-80. doi:10.1108/00220410810844150
Boyack, K. W., & Klavans, R. (2005). Predicting the importance of current papers. In
P. Ingwersen, & B. Larsen, (Eds.), Proceedings of the 10th international conference of
the international society for scientometrics and informetrics (pp. 196-203). Stockholm,
Sweden: Karolinska University Press.
Bradford, S. C. (1950). Documentation. Washington, DC: Public Affairs Press.
Braun, T., Glänzel, W., & Schubert, A. (1985). Scientometric indicators: A 32-country
comparative evaluation of publishing performance and citation impact. Philadelphia, PA:
World Scientific.
Braun, T., Glänzel, W., & Grupp, H. (1995a) The scientometric weight of 50 nations in 27 sci-
ence areas, 1989-1993. 1. All fields combined: mathematics, engineering, chemistry and
physics. Scientometrics, 33, 263-293.
Braun, T., Glänzel, W., & Grupp, H. (1995b) The scientometric weight of 50 nations in 27 sci-
ence areas, 1989-1993. 2. Life sciences. Scientometrics, 34, 207-237.
Braun, T., Glänzel, W., & Schubert, A. (2006). A Hirsch-type index for journals. Scientometrics,
69, 169–173. doi:10.1007/s11192-006-0147-4
Brooks, T. A. (1986). Evidence of complex citer motivations. Journal of the American Society
for Information Science, 37, 34-36.
Brysbaert, M., & Smyth, S. (2011). Self-enhancement in scientific research: The self-citation
bias. Psychologica Belgica, 51(2), 129-137.
Bulick, S. (1978). Book use as a Bradford-Zipf phenomenon. College and Research Libraries,
39, 215-218.
Burrell, Q. L. (2003). Predicting future citation behavior. Journal of the American Society for
Information Science and Technology, 54, 372-378. doi:10.1002/asi.10207
Cano, V., & Lind, N. C. (1991). Citation life-cycles of 10 citation-classics. Scientometrics, 22,
297-312.
Cawkell, A. E. (1976). Documentation note—citations, obsolescence, enduring articles, and
multiple authorships. Journal of Documentation, 32, 53-58.
Cole, J. R., & Cole, S. (1973). Social stratification in science. Chicago, IL: The University of
Chicago Press.
Cole, S. (1992). Making science: Between nature and society. Cambridge, MA: Harvard
University Press.
Cronin, B. (1982). Norms and functions in citation—the view of journal editors and referees in
psychology. Social Science Information Studies, 2, 65-78.
Cronin, B. (2005a). The hand of science: Academic writing and its rewards. Lanham, MD:
Scarecrow Press.
Cronin, B. (2005b). A hundred million acts of whimsy? Current Science, 89, 1505-1509.
Ding, Y., Chowdhury, G. G., & Foo, S. (2001). Bibliometric cartography of information retrieval
research by using co-word analysis. Information Processing & Management, 37, 817-842.
doi:10.1016/S0306-4573(00)00051-0
Dodson, D. V. (2009). Citation analysis: Maintenance of h-index and use of e-index. Biochemical
and Biophysical Research Communications, 387, 625-626. doi:10.1016/j.bbrc.2009.07.091
Eggert, L. D. (2011). Best practices for allocating appropriate credit and responsibility to
authors of multi-authored articles. Front Psychology, 2, 196. doi:10.3389/fpsyg.2011.00196
Egghe, L. (2006). Theory and practice of the g-index. Scientometrics, 69(1), 131-152.
doi:10.1007/s11192-006-0144-7
Fersht, A. (2009). The most influential journals: Impact factor and eigenfactor. Proceedings
of National Academy of Sciences of USA, 106, 6883-6884. doi:10.1073/pnas.0903307106s
Frazzetto, G. (2004). Who did what? Uneasiness with the current authorship is prompting
the scientific community to seek alternatives. European Molecular Biology Organization
Reports, 5, 446-448.
Garfield, E. (1955). Citation indexes for science: A new dimension in documentation through
association of ideas. Science, 122(3159), 108-111.
Garfield, E. (1962). Can citation indexing be automated? Essays of an Information Scientist
(Vol. 1, pp. 84-90). Philadelphia, PA: ISI Press
Garfield, E. (1972). Citation analysis as a tool in journal evaluation—journals can be ranked by
frequency and impact of citations for science policy studies. Science, 178, 471-479.
Garfield, E. (1980). Essays of an information scientist (Vol. 3). Philadelphia, PA: ISI Press.
Garfield, E. (1986). Do Nobel Prize winners write citation classics? Current Contents, 23, 3-8.
Garfield, E. (1990). How ISI selects journals for coverage: Quantitative and qualitative consid-
erations. Current Contents, 22(28), 5-13.
Garfield, E. (1996). How can impact factors be improved? British Medical Journal, 313, 411-413.
Garfield, E. (1997). Dispelling a few common myths about journal citation impacts. Scientist,
11(3), 11-15.
Garfield, E. (1999). Journal impact factor: A brief review. Canadian Medical Association
Journal, 161, 979-980.
Garfield, E. (2006). The history and meaning of Journal Impact Factor. JAMA, 295(1), 90-93.
Geisler, E. (2005). The measurement of scientific activity: research directions in linking phi-
losophy of science and metrics of science and technology outputs. Scientometrics, 62,
269-284. doi:10.1007/s11192-005-0020-x
Gilbert, G. N. (1977). Referencing as persuasion. Social Studies of Science, 7, 113-122.
Gross, P. L. K., & Gross E. M. (1927). College libraries and chemical education. Science, 66,
385-389.
Hagen, N. T. (2008). Harmonic allocation of authorship credit: Source-level correction of bib-
liometric bias assures accurate publication and citation analysis. PLoS ONE, 3(12), e4021.
doi:10.1371/journal.pone.0004021
Hargens, L. L. (2000). Using the literature: reference networks, reference contexts, and the
social structure of scholarship. American Sociological Review, 65, 846-865.
Harzing, A.W. (2010). Publish or perish, version 3.0. Retrieved from www.harzing.com/pop
.htm
Harzing, A., & van der Wal, R. (2008). Google Scholar as a new source for citation analysis?
Ethics in Science and Environmental Politics, 8(1), 62-72. doi:10.3354/esep00076
Hirsch, J. E. (2005). An index to quantify an individual’s scientific output. Proceedings of
the National Academy of Sciences of the United States of America, 102, 16569-16572.
doi:10.1073/pnas.0507655102
Hoeffel, C. (1998). Journal impact factors. Allergy, 53, 1225. doi:10.1111/j.1398-9995.1998.tb03848.x
Hurt, C. D. (1987). Conceptual citation differences in science, technology, and social sciences
literature. Information Processing & Management, 23, 1-6.
Hyland, K. (2003). Self-citation and self-reference: Credibility and promotion in academic
publication. Journal of the American Society for Information Science and Technology, 54(3),
251-259.
Jeung, C. W., Yoon, H. J., Park, S., & Jo, S. J. (2011). The contributions of human resource
development research across disciplines: A citation and content analysis. Human Resource
Development Quarterly, 22(1), 87-109. doi:10.1002/hrdq.20062
Jo, S. J., Jeung, C., Park, S., & Yoon, H. (2009). Who is citing whom: Citation network analysis
among HRD publications from 1990 to 2007. Human Resource Development Quarterly,
20(4), 503-537. doi:10.1002/hrdq.20023
King, J. (1987). A review of bibliometric and other science indicators and their role in research
evaluation. Journal of Information Science, 13, 261-276.
Klamer, A., & van Dalen, H. P. (2002) Attention and the art of scientific publishing. Journal of
Economic Methodology, 9, 289-315. doi:10.1080/1350178022000015104
Klein, D. B., & Chiang, E. (2004). Citation counts and SSCI in personnel decisions: A survey of
economics departments. Economics Journal Watch, 1, 166-174.
Knorr-Cetina, K. (1981). The manufacture of knowledge: An essay on the constructivist and
contextual nature of science. Oxford, UK: Pergamon.
Lederberg, J. (2000). How the Science Citation Index got started. In B. Cronin & H. B. Atkins
(Eds.), The web of knowledge (pp. 25-64). Medford, NJ: American Society for Information
Science.
Leydesdorff, L. (2001). The challenge of scientometrics: The development, measurement, and
self-organization of scientific communications (2nd ed.). Leiden, Netherlands: Universal.
Lotka, A. J. (1926). The frequency distribution of scientific productivity. Journal of the Wash-
ington Academy of Sciences, 16, 217-223.
MacRoberts, M. H., & MacRoberts, B. R. (1987). Another test of the normative theory of citing.
Journal of the American Society for Information Science, 38, 305-306.
MacRoberts, M. H., & MacRoberts, B. R. (1996). Problems of citation analysis. Scientometrics,
36, 435-444. doi:10.1007/BF02129604
Maddox, J. (1989). Is the salami sliced too thinly? Nature, 342, 733.
518 Human Resource Development Review 11(4)
Schoonbaert, D., & Roelants, G. (1996). Citation analysis for measuring the value of scientific
publications: Quality assessment tool or comedy of errors? Tropical Medicine & Interna-
tional Health, 1, 739-752. doi:10.1111/j.1365-3156.1996.tb00106.x
Sidiropoulos, A., Katsaros, D., & Manolopoulos, Y. (2007). Generalized h-index for disclosing
latent facts in citation networks. Scientometrics, 72, 253-280. doi:10.1007/s11192-007-1722-z
Small, H. (1982). Citation context analysis. In B. Dervin & M. Voigt (Eds.), Progress in com-
munication science (Vol. 3, pp. 287-310). Norwood, NJ: Ablex.
Smart, S., & Waldfogel, J. (1996). A citation-based test for discrimination at economics and
finance journals (NBER Working Paper No. 5460). Cambridge, MA: National Bureau of
Economic Research.
Smith, L. C. (1981). Citation analysis. Library Trends, 30, 83-106.
Smith, R. (2006). Peer review: A flawed process at the heart of science and journals. Journal
of the Royal Society of Medicine, 99, 178-182.
Stewart, J. A. (1983). Achievement and ascriptive processes in the recognition of scientific
articles. Social Forces, 62, 166-189.
Swanson, R. A. (1999). HRD theory, real or imagined? Human Resource Development Interna-
tional, 2(1), 2-5.
Taborsky, M. (2007). Impact factor statistics and publication practice: What can we learn?
Ethology, 113, 1-8. doi:10.1111/j.1439-0310.2007.01329.x
Taubes, G. (1993). Measure for measure in science: How citation analysis and Science Watch,
its primary showcase, are turning science into a numbers game and stirring mixed feelings
among researchers. Science, 260, 884-886.
Testa, J. (2008). The Thomson Scientific journal selection process. Contributions to Science,
4(1), 69-73.
Tijssen, R. J. W., van Leeuwen, T. N., & van Raan, A. F. (2002). Mapping the scientific
performance of German medical research: An international comparative bibliometric study.
Stuttgart, Germany: Schattauer.
Tsai, W., & Wu, C. (2010). Knowledge combination: A cocitation analysis. Academy of Man-
agement Journal, 53, 441-450.
Van den Besselaar, P., & Leydesdorff, L. (1996). Mapping change in scientific specialties:
A scientometric reconstruction of the development of artificial intelligence. Journal of
the American Society for Information Science, 47, 415-436. doi:10.1002/(SICI)1097-
4571(199606)
van Raan, A. F. J. (2005). For your citations only? Hot topics in bibliometric analysis. Measure-
ment: Interdisciplinary Research and Perspectives, 3, 50-62.
Wang, G. G., Rothwell, W. J., & Sun, J. Y. (2009). Management development in China: A policy
analysis. International Journal of Training and Development, 13, 205-220.
Wang, G. G., & Spitzer, D. R. (2005). HRD measurement & evaluation: Looking back and
moving forward. Advances in Developing Human Resources, 7(1), 6-15. doi:10.1177/
1523422304272077
Wang, G. G., & Sun, J. Y. (2009). Clarifying the boundaries of human resource development.
Human Resource Development International, 12(1), 93-103.
White, H. D., & McCain, K. W. (1998). Visualizing a discipline: An author co-citation analysis
of information science, 1972-1995. Journal of the American Society for Information Sci-
ence, 49, 327-356.
Yue, W., & Wilson, C. S. (2004). Measuring the citation impact of research journals in clini-
cal neurology: A structural equation modelling analysis. Scientometrics, 60, 317-332.
doi:10.1023/B:SCIE.0000034377.93437.18
Zhang, C. T. (2009). The e-index, complementing the h-index for excess citations. PLoS ONE,
4(5), e5429. doi:10.1371/journal.pone.0005429
Zipf, G. K. (1972). Human behavior and the principle of least effort. New York, NY: Hafner.
Bios
Greg G. Wang, PhD, is a professor of human resource development in the College of Business
and Technology at The University of Texas at Tyler. He is also the editor of the Journal of
Chinese Human Resource Management.
Jerry W. Gilley, EdD, is a professor and the chair of the Department of Human Resource
Development and Technology in the College of Business and Technology at The University of
Texas at Tyler. He has served as the president of the Academy of Human Resource Development.
Judy Y. Sun, PhD, is an assistant professor of human resource development in the College of
Business and Technology at The University of Texas at Tyler. Her research interests include
career development, HRD theory-building research, and the economic analysis of HRD.