Accounting Forum 32 (2008) 89–113

The determinants of a successful accounting manuscript: Views of the informed

Tony Brinn 1, Michael John Jones ∗,2
Cardiff Business School, Colum Drive, Cardiff CF10 3EU, UK

This paper is dedicated to the memory of Tony Brinn.

Abstract
Publishing academic articles is a key measure by which the performance of modern academics is judged. However, there is a
surprising lack of guidance to authors about which manuscript characteristics influence journal acceptance decisions. This article
reports the results of a study into the perceptions of 129 editorial board members of 87 statements relating to the publishability
of manuscripts in accounting journals. The respondents indicated certain factors which enhanced the chances of publication (e.g.,
statistical significance, originality, appropriate statistics and application of theory) and certain factors which detracted from it (e.g.,
replication, secondary data, polemic pieces, overlapping work and inaccessibility). Reviewers answering from the perspective of US journals
were more concerned with statistical significance, the use of appropriate statistics, and the generalisability of samples and case
studies. There was a high degree of consistency in the results. We found no systematic differences between those adopting critical
as opposed to mainstream research.
Crown Copyright © 2008 Published by Elsevier Ltd. All rights reserved.

Keywords: Journal submissions; Manuscript characteristics; Publishability

1. Introduction

Publishing academic articles is a key measure by which the performance of modern accounting academics
is judged. For individuals, a successful publication record leads to institutional rewards such as tenure, pro-
motion and salary increments (Gomez-Mejia & Balkin, 1992). Editorial board membership itself generally reflects
publication records (Brinn & Jones, 2007). In addition, in the wider academic community indi-
viduals benefit from enhanced visibility, peer esteem and the creation of an academic reputation (Chung, Park,
& Cox, 1992). For institutions, too, there are marked benefits. Institutions with successful publishers gain pres-
tige. Tables of top publishing institutions are regularly published in academic journals and indirectly capture this
prestige (Heck, Jensen, & Cooley, 1990; Stahl, Leap, & Wei, 1988). More directly, in countries such as the UK,
successful institutions are financially rewarded through institutional mechanisms such as research assessment exer-
cises. Finally, there is much research into journal rankings (e.g., Ballas & Theoharakis, 2003; Brinn, Jones, &

∗ Corresponding author. Tel.: +44 29 20874 000; fax: +44 29 20874 419.
E-mail address: JonesM12@cardiff.ac.uk (M.J. Jones).
1 Tony Brinn was a lecturer in accounting, but tragically died in June 2005.
2 Michael John Jones is Professor of Financial Reporting at Cardiff University.

0155-9982/$ – see front matter. Crown Copyright © 2008 Published by Elsevier Ltd. All rights reserved.
doi:10.1016/j.accfor.2007.12.002

Pendlebury, 1996; Hasselback & Reinstein, 1995; Lowe & Locke, 2005), and into the publication process more
generally (e.g., Beattie & Goodacre, 2004; Brown, Jones, & Steele, 2007; Humphrey, 2001; Jones & Roberts,
2005).
Most academic accounting journals follow relatively similar procedures when determining whether or not to publish
articles submitted to them. On receipt of the manuscript the journal editor will choose one, but more generally two,
reviewers. ‘[R]eviewing is a demanding, difficult, and time-consuming task’ (Jauch & Wall, 1989, p. 168). In accounting,
these reviewers will usually be knowledgeable and experienced accounting academics. The articles are then sent to
these reviewers. Under the system of double-blind review, the reviewers are not formally notified of the author's
name, nor is the author notified of the reviewers' identities. However, given the nature of the prepublication process (conference
papers, seminar presentations and working papers) the reviewers may well be able to deduce the author’s identity. The
reviewers will review the papers using their own experience about what constitutes a publishable paper as well as,
in some instances, some broad guidelines to authors published by the journal. The reviewers then write a report that
outlines the reviewers’ perception of the strengths and weaknesses of the paper. They also recommend to the editor
whether the paper should be rejected, revised and resubmitted, or published as it is. On the basis of the reviewers’
reports, and perhaps also on their own reading of the manuscript, the editor will inform the author whether the paper
is accepted, rejected or to be revised and resubmitted. In some journals, rejection rates may run to 90%. Generally,
successful authors will revise and resubmit papers several times before ultimate acceptance. Indeed, Ellison (2002)
suggests that over the last few decades the review and publication process has lengthened and grown more complex
across academia as a whole.
Despite the importance of this process, there is little formal guidance to authors or reviewers on the determi-
nants of a successful article either from journal editors or from published evaluation criteria (Cummings, Frost, &
Vakil, 1985). Some journals do provide very general review criteria checklists, but these usually do not offer spe-
cific guidance. There is also a surprising absence, in the accounting literature and in the broader academic literature,
of evidence on the manuscript characteristics which influence acceptance decisions. Indeed, we found only two articles which
systematically attempt to rate manuscript characteristics: Kerr, Tolliver, and Petree (1977) in management and social
science and Czyzewski and Dickinson (1990) in accounting. Both used identical methodologies. Gradually, over
time, authors and reviewers gain experience of what does and what does not constitute best practice. However,
individuals differ in their experiences, knowledge and judgement. Individual judgements of the determinants of a
good article will also vary. In addition, researchers may be influenced by the type of research they conduct. For
example, there may be a difference between those who conduct ‘critical’ research and those more traditional aca-
demics who focus on statistical, quantitative methodologies (see, for example, Reiter & Williams, 2002; Tinker, 2006).
Tinker (2006), for example, shows that critical journals focus more on corporate failures and are more relevant to
practitioners.
The importance of the accounting review process and our lack of knowledge of the characteristics of successful
manuscripts both provide the motivation for the current paper. This article aims to throw some light on the accounting
review process by surveying the views of experienced reviewers of accounting journals. It examines the elusive
and tricky question of what determines a good journal article. In particular, it attempts to identify characteristics of
manuscripts that make them more or less likely to be accepted by reviewers. Evidence is also provided on whether
different reviewers are broadly consistent in their views of these manuscript characteristics.
This research will also permit potential authors to target research characteristics that enhance the possibility of an
ultimately successful submission. If researchers have more appreciation of desirable research characteristics they will
tend to produce better, more readily publishable, work. Our research provides some initial data on this issue as well as
differentiating between the research characteristics of UK and US journals. We also investigate the issue of whether
critical researchers assess manuscripts in a different way to more traditional researchers.
As well as providing an aggregate picture, this research provides evidence on whether the characteristics of review-
ers and of the journals for which they review influence their accept/reject decisions. This may provide useful
information to journal editors when they are selecting reviewers.
The rest of the paper is organised into five sections. A review of the relevant literature is next. This is divided
into general background and specific studies, including the one prior study in accounting. A description of our meth-
ods then follows. The results are presented next. We then discuss the findings, drawing upon the prior research, and
consider the limitations. In the final section, we summarise our results, discuss their implications and suggest future
research.

2. Literature review

2.1. General background

This research is conducted against the background of much recent concern about the whole nature of the research
and publication process. In particular, there has been concern with two themes: elitism and the dominance
of quantitative research. The research into elitism has been conducted in a variety of contexts. Williams and Rodgers
(1995) and Rodgers and Williams (1996) investigate elitism in The Accounting Review. They find that The Accounting
Review is dominated by certain scholars from a select band of US universities. These ‘elite’ individuals dominate the
publication process as both authors and reviewers. Lee (1997) looked at highly rated journals: three from the US
(The Accounting Review, The Journal of Accounting Research, and The Journal of Accounting and Economics) and two
from the UK (Accounting, Organizations and Society and Accounting and Business Research). His findings confirm a
dominant presence of elite academics on these journals’ editorial boards. This finding is confirmed by Brinn and Jones
(2008). In a study of UK academics, they find that an elite group of UK institutions and individuals dominate UK
editorial board membership. More broadly, Lee and Williams (1999) demonstrate that there is a controlling elite in the
US research community. Finally, Chan, Chen, and Cheng (2007, p. 187) show that in the publication process there is a
significant elite degree effect where authors ‘who graduated from elite accounting programmes have a disproportionate
share of publications in top-rated journals’.
The research into the dominance of quantitative research is mainly US based. Researchers, such as Reiter and
Williams (2002), Tinker (2001), Briloff (2004) and Williams, Jenkins, and Ingraham (2006) show that mainstream
US academic research faces severe problems. Reiter and Williams (2002, p. 576) show that there was a ‘revolutionary
change in accounting research in the 1960s and 1970s from a so-called a priori, normative approach to an empirical,
economic based research programme’. This confirms research by authors such as Fleming, Graci, and Thompson
(2000).
Reiter and Williams (2002) show this positive accounting research agenda was seriously criticised in the 1990s for
not leading practice, for lacking cycles of significant innovation, for not addressing any of the fundamental issues in
accounting and for not creating a demand in the wider business world for accounting academics. Tinker (2001, p. 151),
commenting on the dispute between Abraham Briloff, a distinguished US professor, and the American Accounting Association,
criticised many US accounting professors as being essentially financial economists with only a ‘remote inkling of
accounting practice’. This confirms Whitley’s (1986) finding of the transformation of business finance into financial
economics. Briloff (2004, p. 790) argues strongly that accounting has lost its way. ‘First-rate accounting scholars,
carrying on their research as second-rate finance-economic scholars, e.g., the efficient market, working with third rate
mathematical models, programmed with fourth rate aggregate data, collated from fifth rate databanks, compiled by
sixth rate drones.’ As a result, research with little relationship to the real world of accounting concepts and practices finds
its way into prestigious academic journals. This, in turn, leads to promotion of these individuals and distinguished
professorships. This then sets the pattern for the next generation of accounting academics. Finally, Williams et al.
(2006) show that behavioural research in the US has been squeezed out by positive, empirical research.

2.2. Specific studies

We first review the only two studies (both perceptions-based) which systematically rate manuscript characteristics
(Czyzewski & Dickinson, 1990; Kerr et al., 1977). We then review four other studies which list, but do not system-
atically rate, individual manuscript characteristics (Campion, 1993; Campbell, 1982; Gottfredson, 1978; Mitchell,
Beach, & Smith, 1985). Finally, we look at several studies which cover, but whose main focus is not, manuscript
characteristics.
Czyzewski and Dickinson (1990) is the only prior study which we found that covers the characteristics of accounting
journals. They followed closely the approach used by Kerr et al. (1977) whose findings are discussed below. A six-
point Likert scale was used with 1 indicating a likelihood of acceptance and 6 indicating a sure rejection. In total, 318
usable responses were received from reviewers of 20 journals. In all, 37 questions were asked covering references,
data type, author’s affiliation, previous presentation, statistically insignificant findings, replication studies, manuscript
content, manuscripts without new data, reviewer opinion, study generalisability, author reputation, design and
analysis characteristics, and manuscript length. The overall conclusion was that several factors increased the likelihood

of rejection ‘including non-significant results, no new information, inclusion of the full manuscript in the proceedings
of a regional or national meeting, lack of generalisability of results, lack of a control group, and a topic outside the
mainstream of the field’ (Czyzewski & Dickinson, 1990, p. 103). Two factors increased the likelihood of acceptance:
‘if they contain a new theory with significant results or a topic of interest to the field which differs in content from
articles traditionally published in the journal’ (Czyzewski & Dickinson, 1990, p. 104). Czyzewski and Dickinson do
not give the means of the individual characteristics so it is difficult to form an overall picture of their results.
Kerr et al. (1977) in an earlier study looked at the same 37 items as rated by 301 individuals from the editorial
and advisory boards of 19 leading management and social science journals. Their findings were that ‘a number of
characteristics impaired publication: replication studies, manuscripts with no new data, articles on the same topic as
many recent journal articles, articles from outside the mainstream and manuscripts previously presented at professional
association meetings and reproduced in proceedings’. Three characteristics increased likelihood of acceptance: strong
author reputation, successful test of authors’ own theory and content different from that traditionally published in the
journal.3 Like Czyzewski and Dickinson (1990), Kerr et al. (1977) do not give means for individual statements, so
an overall comparison of their results is impeded.
Gottfredson (1978) drew up 83 statements describing attributes of psychological journal articles and submitted
them to all editors, associate editors and consulting editors of nine psychological journals from 1968 to 1975. He
received 299 responses. Using principal components analysis, nine components represented 49.6% of the variance.
These were: a list of don’ts (e.g., problem has not been considered carefully enough; the design used does not justify
the conclusions drawn); a list of substantive dos (e.g., it attempts to unify the field); a list of stylistic dos (e.g., it is well
written); originality and heurism (e.g., it makes the reader think about something in a different way); triviality (e.g., the
problem is trivial); scientific advancement (e.g., it speaks to central problems); data grinding (e.g., heavy on results,
light on discussion); brute empiricism (e.g., precisely the same procedures as everybody else); and narrowness (e.g.,
small amounts of data from a large research project). Unfortunately, given the lack of systematic ranking, it is difficult
from this study to ascertain clearly the relative importance of the 83 criteria. This impairs the usefulness of this study.
Campbell (1982) was editor of the Journal of Applied Psychology from 1976 to 1982. He records some reasons for
manuscript rejection in his closing editorial address. They are drawn from his personal observation rather than from
systematic empirical analysis and are not ranked in any way. The five reasons were: the procedure could not answer the questions asked; non-meaningful
questions were addressed; the manuscript was not understandable; low statistical power; and repetition of prior work, especially
theory. He also discussed the importance of non-significant results, and his opinion (which differed from others) was
that these are not a particularly important reason for rejection.
Mitchell et al. (1985) sample 99 respondents from the review boards of five organisational behaviour journals.
They investigate four broad categories of manuscript characteristics (importance of the work; methodology; logical
considerations; and presentation). These were divided into 12 sub-categories. Their respondents perceived that a
manuscript’s contribution (i.e., advances knowledge; extends previous work to a significant degree and acts as a bridge
between studies) was overwhelmingly important.
Campion (1993), editor of Personnel Psychology, drew up a comprehensive checklist of criteria for reviewing articles.
This comprised 246 criteria across 15 categories (importance of topic; literature review; conceptual development;
additional criteria for literature review and conceptual papers; sample and setting; measurement; design (experimental
and quasi-experimental); design (non-experimental and cross-sectional); measurement design (meta-analysis); design
(qualitative); procedures; data analysis and results; discussion and conclusions; presentation and contribution). Although
comprehensive, these criteria are not rated or evaluated and, therefore, cannot be ranked in importance.
Four further articles cover manuscript characteristics as part of broader studies of the review process (Bakanic, McPhail, & Simon, 1987; Beyer, Chanove, & Fox,
1995; Daft, Griffin, & Yates, 1987; Gilliland & Cortina, 1997). Bakanic et al. (1987) look at peer review and editorial
decision-making processes for manuscripts submitted to the American Sociological Review between 1977 and 1981.
They investigated manuscript characteristics (submission rate of manuscripts, substantive area, number of persons,
methods of data collection and methods of data analysis) as determinants of reviewers’ recommendations. The only
significant result was that qualitative data analysis was more likely to be rejected. Daft et al. (1987) investigate 56
organisational scholars’ reports about their ‘significant’ and ‘not-so-significant’ research projects. The scholars rated
novel, original or creative research as the key characteristic of significant research.

3 Of course, author reputation is only relevant when the reviewers can deduce the author(s) identity.

Beyer et al. (1995) investigate 400 manuscripts submitted to the Academy of Management Journal between 1984 and
1987. They tested a range of manuscript characteristics: length of documentation (number of tabular manuscript pages and
number of references), acknowledgement of funding, whether the submitter was first author, claims of novelty, claims of
disconfirmation of prior research, sophistication of statistics, non-significant results and clarity of the paper. Novelty
claims, claims of disconfirmation of previous research and clearly written papers were significantly more likely to
receive positive editorial decisions, while non-significant results were more likely to be rejected.
Finally Gilliland and Cortina (1997) look at 823 submissions to the Journal of Applied Psychology with respect
to author and paper characteristics, reviewer evaluations and editor decisions. They found that ‘reviewers and editors
appeared to pay particular attention to the adequacy of the research design, operationalization of constructs and
theoretical developments’ (Gilliland & Cortina, 1997, p. 427).

3. Methods

Seven-hundred-and-fifty-six members of editorial boards of accounting journals were sent the questionnaire in
November 2000. Editorial board lists were chosen as the most likely way of identifying reviewers across a broad
spread of journals. We selected an extensive list of journals as our sampling basis as our purpose was to gain a
comprehensive picture of all reviewers’ attitudes. We, therefore, did not choose a particular subset of journals (e.g.,
history, international or critical), which would have given us the views of particular constituencies. The journals used were
accounting journals that had appeared in previous peer review studies supplemented by a small number of relatively
recent journals. Where individuals appeared on the editorial board of one or more journals they were sent a single copy
of the questionnaire.
Replies were received up to April 2001. In all 129 usable responses were received giving a response rate of 17.1%.
The data were tested for non-response bias by analysing the responses of the last 10% of respondents. This group
was compared with the early responders using a Kruskal–Wallis test to examine median differences in responses.
Significant differences were found in 9 of the 87 statements.4 We conclude there is no strong evidence of non-response
bias.
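To illustrate the kind of non-response check described above, the sketch below compares early and late responders on each statement using SciPy's Kruskal–Wallis test. The file name, the column names and the use of a receipt date to flag the last 10% of responses are illustrative assumptions only, not the authors' actual code or data layout.

import pandas as pd
from scipy.stats import kruskal

# Hypothetical layout: one row per respondent, Likert columns q1..q87,
# plus a 'received' date used to flag the last 10% of responses as late.
responses = pd.read_csv("responses.csv", parse_dates=["received"])
late = responses["received"] > responses["received"].quantile(0.90)

flagged = []
for q in [c for c in responses.columns if c.startswith("q")]:
    early_scores = responses.loc[~late, q].dropna()
    late_scores = responses.loc[late, q].dropna()
    result = kruskal(early_scores, late_scores)
    if result.pvalue < 0.05:  # statements where early/late medians differ
        flagged.append((q, round(result.pvalue, 3)))

print(f"{len(flagged)} of 87 statements show significant early/late differences")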
Conceptually, reviewers will have been influenced by their own individual life experiences, and the type of research
and journals in which they publish. We, therefore, also gathered information on the personal characteristics of respon-
dents in the questionnaire and on the journals for which individuals reviewed. This was voluntary, however, and some
respondents did not provide all the data requested. Of the respondents who provided data, 101 held the rank of full
professor. The other respondents were generally senior faculty (mainly associate professor (US) or senior lecturer
(UK)). Geographically, 83 of the respondents were based in US universities, 21 in UK universities, eight in Australian
universities and eight in universities elsewhere. Ninety-two of the respondents held a professional accounting qualifi-
cation and 118 held a PhD. Only two respondents were under the age of 40, with 43 under 50, and the majority (84)
over 50. The majority (118) had also been in an academic post for more than 10 years. One-hundred-and-ten had been
regular reviewers for more than 10 years and an average of 10.75 manuscripts were reviewed by the respondents each
year.
In order to investigate whether there were any differences between the types of research (critical vs. mainstream)
which researchers conducted, we divided the sample into critical and mainstream researchers. Initially, reviewers
were divided into three groups: mainstream only (reviews for journals such as The Accounting Review or Journal of
Accounting Research), mixed (reviews for mainstream journals and critical journals) and critical (reviews for journals
such as Critical Perspectives on Accounting or Accounting, Auditing and Accountability Journal). This selection was
based on two questions where reviewers indicated the journals for which they most frequently acted as a reviewer and
the journals from whose perspective they have answered the questionnaire. There were 17 wholly critical reviewers, 32
mixed reviewers and 80 mainstream reviewers. A large proportion of reviewers therefore straddled the divide between
critical and mainstream research.
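A minimal sketch of this grouping logic is given below. The seed sets contain only the example journals named above, and the function and field names are hypothetical; the authors' actual coding of journals is not reproduced here.

# Example journals named in the text; a fuller classification would list every journal surveyed.
CRITICAL = {"Critical Perspectives on Accounting",
            "Accounting, Auditing and Accountability Journal"}
MAINSTREAM = {"The Accounting Review", "Journal of Accounting Research"}

def classify(reviewed_journals, perspective_journal):
    """Return 'critical', 'mixed' or 'mainstream' for one respondent."""
    journals = set(reviewed_journals) | {perspective_journal}
    is_critical = bool(journals & CRITICAL)
    is_mainstream = bool(journals & MAINSTREAM)
    if is_critical and is_mainstream:
        return "mixed"
    return "critical" if is_critical else "mainstream"

# A reviewer for The Accounting Review answering from an AAAJ perspective is 'mixed'.
print(classify(["The Accounting Review"],
               "Accounting, Auditing and Accountability Journal"))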

4 Late responders were more favourably disposed where (i) the sample data is unrepresentative of the population, (ii) the manuscript is based on secondary data, (iii) the manuscript suggests a new theory but does not test it, (iv) the manuscript develops a new theory although not convincingly, (v) the manuscript contains only analysis of secondary data, (vi) the manuscript is written in an area in which the reviewer has not previously published, (vii) the manuscript is of publishable standard except for the literature review and (viii) the manuscript has previously been presented at more than one research conference. Late responders were less favourably disposed towards a manuscript that typically uses appropriate complex statistics.

The highest number of reviews by an individual per annum was 75 and the lowest was one review. Seventy reviewers
reviewed fewer than 10 manuscripts per year, while 53 reviewed more than 10. Respondents described themselves as
ranging from moderately active researchers (58) through quite active (38) to very active (31), and as publishers ranging
from moderately successful (43) through mainly successful (51) to very successful (34). Nineteen of the respondents
were female and 110 were male. Overall, therefore, our average respondent was a US male professor with a professional
accounting qualification and PhD, who was over 50, had been a reviewer for more than 10 years and rated himself as
research active and successful. Clearly, we are dealing with experienced and knowledgeable reviewers—leaders in the
accounting discipline.
The only prior study of which we were aware in accounting was by Czyzewski and Dickinson (1990) (which in
turn was based on Kerr et al. (1977) in management and social science). We used this study and other non-accounting
studies together with the researchers’ own personal knowledge of the review process to compile the questionnaire. We
thus built on the prior research, but used our collective experience to develop a broad set of questions related to the
determinants of a successful accounting manuscript. We were careful to customise our questionnaire for the accounting
discipline, for example, by excluding areas of less importance to accounting (such as refereed conferences).
Our article extends the prior accounting study in both the scope and breadth of the questions asked (87 in this study,
but only 37 in Czyzewski and Dickinson). In fact, of the 87 questions we asked only 13 were identical to those in
Czyzewski and Dickinson and only seven were similar. In addition, in our study we rank, as well as rate, the manuscript
characteristics. The questionnaire was pre-tested on a small sample of UK accounting academics, several of whom
held editorial board memberships and all of whom had acted as reviewers. Modifications were then made to clarify some
aspects of the questionnaire and to reduce repetition.
The respondents were questioned about the characteristics of the manuscript and their responses recorded on a
five point Likert scale. Respondents were asked to identify the particular journal from whose perspective they were
answering the questions. Respondents were asked if a particular characteristic would add or detract from the likelihood
of publishing the work on a scale from 1 (detract strongly), through 2 (detract partly), 3 (neutral), 4 (add partly) to
5 (add strongly). Respondents were asked to treat each question as completely separate. Mean responses and the
proportion of respondents along each point of the scale were calculated.
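As an illustration of these calculations, the sketch below computes, for each statement, the mean, the standard deviation and the proportion of valid responses at each scale point. The file and column names are assumed for the purpose of the example only; this is not the authors' code.

import pandas as pd

responses = pd.read_csv("responses.csv")  # hypothetical: one row per respondent
likert_cols = [c for c in responses.columns if c.startswith("q")]

summary = pd.DataFrame({
    "mean": responses[likert_cols].mean(),
    "sd": responses[likert_cols].std(),
    "n": responses[likert_cols].count(),
})
# Percentage of valid responses at each point of the 1-5 scale.
for point in range(1, 6):
    summary[f"pct_{point}"] = responses[likert_cols].apply(
        lambda s: round((s.dropna() == point).mean() * 100, 1))

print(summary.sort_values("mean", ascending=False))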
Prior research from organisational behaviour and other disciplines suggests that certain background characteristics
may influence editorial board members’ judgements: education (Crane, 1967), seniority and success (Cummings et al.,
1985), gender (Over, 1982), institutional affiliation and nationality. Education was tested using the presence/absence
of an MBA, a PhD and a professional qualification. Seniority was evaluated using age, length of service and post held.
Cummings et al. (1985), for example, cite some evidence from organisational behaviour that senior academics are
on the whole likely to be less harsh critics than their less experienced counterparts. Success was tested using
self-rank as a publisher and researcher. We next tested gender and institutional affiliation, two other factors that might
influence respondents’ judgements. For example, Over (1982) demonstrates that the research productivity of male
and female academic psychologists differs, with males producing a greater quantity but not quality of output. We
also tested nationality (US vs. UK). We compare the perceptions of only the two largest groups of academics as the
other groups, such as Australia or Canada, proved too small for testing. Nationality has been suggested by some
researchers as a barrier to publication, particularly in US journals (Brinn, Jones, & Pendlebury, 2001). We tested
whether the perspective of the journal which the reviewers adopted influenced the results. Overall, 29 different journals
were identified by reviewers. We tested the US against the non-US perspective. We also tested for differences between
those reviewers who were ‘critical’ researchers and those who were ‘mainstream’ researchers. We differentiated the
reviewers on the basis of both the journals for which they reviewed and the journal perspective adopted. Given the
nature of the statements asked and the underdevelopment of the prior literature we do not formally test any research
hypotheses for these variables.
A Kruskal–Wallis test was used to test whether these variables created differences in responses. We only record
in the tables the background factors (age, gender (sex), length of service in an academic post (LOS), number of years
spent as a reviewer (YAR) and nationality (NAT)) for which there were significant differences on a number of individual
statements.5 As this was not the case for our partition between critical and mainstream, we do not separately disclose
the critical vs. mainstream results in the tables. However, we discuss them fully later.

5 We also investigated differences using the following background characteristics: post held; primary departmental allegiance; MBA; professionally qualified; doctorally qualified; respondents’ self-rank as a researcher and the respondents’ self-reported success in publishing. These background characteristics are not included in the tables as we found very few significant differences.

4. Results

The majority of respondents chose three (the neutral position on the scale) in 44 of the 87 statements. However, in
many instances substantial minorities of respondents were influenced positively or negatively by individual factors.
In the majority of cases, therefore, t-tests indicate significant differences between the mean response and the neutral
position. Only in 13 statements was the mean response not significantly different from three. These statements are
recorded in bold in the tables.
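A hedged sketch of this check is shown below: each statement's responses are tested against the neutral value of 3 with a one-sample t-test, at the 0.10 level referred to in the table notes. The file and column names are again assumptions for illustration, not the authors' code.

import pandas as pd
from scipy.stats import ttest_1samp

responses = pd.read_csv("responses.csv")  # hypothetical layout with columns q1..q87
NEUTRAL = 3.0

for q in [c for c in responses.columns if c.startswith("q")]:
    scores = responses[q].dropna()
    result = ttest_1samp(scores, NEUTRAL)
    # Report statements whose mean cannot be distinguished from the neutral position.
    if result.pvalue >= 0.10:
        print(f"{q}: mean {scores.mean():.2f}, p = {result.pvalue:.2f}")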
We classified the 87 statements into eight categories (research topic; significance, type of data and investigation
used; theory and relationship to previous work; manuscript content; reviewer agreement and familiarity with research
area; manuscript size, form, presentation and standard; reviewer’s knowledge of the author; and dissemination of the
manuscript). We discuss below some of the major findings, but the reader is referred to the tables for the full detailed
analysis of the results. We divide our results into two parts. In the first part, we look at the aggregate results. This
provides a broad-brush analysis. We then seek to stratify the results so that we better reflect the differing background
characteristics of the reviewers and the differing nature of the journals reviewed.

4.1. Aggregate findings

Table 1 examines the first of these categories: the characteristics of the research topic that affect the acceptance
decision. This category contains four statements, in three of which a majority of respondents selected three. However,
the responses were widely dispersed so that the mean responses were all significantly different from three. The most
important factor was that a large minority (41.1%) was influenced favourably (mean 3.45) if the topic had not previously
been published in the journal.6
Table 2 examines the significance, type of data and investigation used for 20 statements. Overall, our respondents
believed that nine factors significantly enhanced the chances of publication, three factors were not significant and eight
significantly decreased the chances of publication.
The most significant factor enhancing publication was that the study produced mostly significant results (mean
3.98). Eighty-two percent of the respondents suggested this adds to the likelihood of acceptance. By contrast, mostly
insignificant results actively detracted from the likelihood of publication (mean 2.22) with 64.5% of respondents
suggesting this negatively affected a paper’s publishability. Case studies were regarded as acceptable (mean 3.64), but
generalisability was important. Whereas 60.4% of respondents were more likely to accept a generalisable case study,
68.2% of respondents were less likely to accept ungeneralisable case studies (mean 2.15). Innovative approaches were
valued: 59.4% viewed positively an approach which was not commonly used but was appropriate (mean 3.46).
Appropriate statistical analysis was also important. Respondents were broadly indifferent between appropriate
simple (mean 3.53), or complex (mean 3.45) statistics. However, 31% of respondents were less likely to accept
univariate data analysis (mean 2.68). There was no great preference either for appropriate parametric (mean 3.41),
or non-parametric approaches (mean 3.33) or, indeed, appropriate mixed parametric and non-parametric techniques
(mean 3.44). There was no preference as to the type of data (ordinal or interval) or whether the study was interview or
survey-based.
Eight characteristics all detracted from publication. In two instances, our respondents felt particularly strongly.
First, unrepresentative samples were the most negative feature as far as reviewers were concerned (mean 1.52). A full
92.9% of the respondents indicated this detracts from the manuscript, with 57% believing this detracted strongly. This
was the second most negative of all 87 individual results. Second, an unaccredited replication of methodology was
also disliked strongly (mean 1.93) with 77.2% of reviewers less likely to accept. Purely argumentative papers with no
empirical data were also unpopular, with 60.5% of respondents believing this would detract from publication (mean 2.24).
Shared databases with other papers (mean 2.72) and using secondary data (mean 2.69) were also viewed negatively by
a minority of respondents.

6 If the topic had previously been published in the journal, by contrast, the mean was 3.14. The reviewers thus attached a premium to originality, as long, presumably, as this fell within the journal’s remit.
Table 1
The research topic

(Figures for each statement: % choosing 1, 2, 3, 4 and 5 on the five-point scale described in note (1); mean; S.D.; significant background-factor p-values (see note (3)); N.)

The research topic has not previously been published in the journal: 0.8, 3.9, 54.3, 31.8, 9.3; mean 3.45, S.D. 0.75; sig. 0.08*; N = 129
The topic is primarily of interest to academics: 1.6, 7.8, 51.2, 31.8, 7.8; mean 3.36, S.D. 0.80; N = 129
The topic is primarily of interest to accounting practitioners, but is treated in an academic way: 0.8, 13.3, 48.4, 27.3, 10.2; mean 3.33, S.D. 0.86; N = 128
The research topic has previously been published in the journal: 3.1, 10.1, 59.7, 24.0, 3.1; mean 3.14, S.D. 0.76; sig. 0.07; N = 129

Notes: (1) A five-point Likert scale (where 1 = detract strongly, 2 = detract partly, 3 = neutral, 4 = add partly and 5 = add strongly) was used to ascertain whether a particular characteristic of a manuscript would add or detract from the likelihood of publishing the work. (2) Figures in bold type are not significantly different from a mean of 3.0 using a t-test. All other mean values are significantly different from 3.0 at 0.10 or better level of significance. (3) Six background factors of the respondents were tested to see if they affected the results (journal perspective used (JPU), age, gender (sex), length of service in an academic post (LOS), number of years spent as a reviewer (YAR) and nationality (Nat)) using a Kruskal–Wallis test. Significant differences only are recorded and the level of significance given in the table. (4) An asterisk means that non-US perspective reviewers, older, male, longer serving, more years as a reviewer and UK reviewers scored significantly greater median ranks than US perspective, younger, female, shorter-serving, fewer years as a reviewer and US reviewers.
Table 2
Significance, type of data and investigation used

(Figures for each statement: % choosing 1, 2, 3, 4 and 5 on the five-point scale described in note (1); mean; S.D.; significant background-factor p-values (see note (3)); N.)

The results of the study are mostly significant: 0.0, 0.8, 17.1, 65.1, 17.1; mean 3.98, S.D. 0.61; sig. 0.01, 0.07; N = 129
The manuscript is based on case study type data and appropriate analysis which are readily generalised: 0.8, 9.3, 29.5, 45.7, 14.7; mean 3.64, S.D. 0.87; sig. 0.09*, 0.01*; N = 129
The manuscript typically uses appropriate simple statistics: 0.0, 4.7, 44.2, 44.2, 7.0; mean 3.53, S.D. 0.70; sig. 0.01, 0.01; N = 129
The manuscript is based on a type of approach which is appropriate but not commonly used in the area of investigation: 0.0, 2.3, 38.3, 50.8, 8.6; mean 3.46, S.D. 0.67; sig. 0.07, 0.10*; N = 128
The manuscript typically uses appropriate complex statistics: 1.6, 7.8, 44.2, 37.2, 9.3; mean 3.45, S.D. 0.83; sig. 0.03; N = 129
The data are analysed using appropriate mixed parametric and non-parametric techniques: 0.0, 0.8, 61.4, 30.7, 7.1; mean 3.44, S.D. 0.64; sig. 0.10, 0.10; N = 127
The data are analysed using appropriate parametric techniques: 0.0, 0.8, 63.8, 29.1, 6.3; mean 3.41, S.D. 0.62; N = 127
The data are analysed using appropriate non-parametric techniques: 0.8, 2.4, 66.1, 24.4, 6.3; mean 3.33, S.D. 0.67; sig. 0.06, 0.10*; N = 127
The manuscript predominantly is based on ordinal data: 0.0, 4.7, 85.0, 7.9, 2.4; mean 3.08, S.D. 0.46; N = 127
The manuscript predominantly is based on interval data: 0.0, 4.7, 86.7, 7.8, 0.8; mean 3.05, S.D. 0.39; N = 128
The manuscript is based on interview data: 2.3, 16.3, 64.3, 15.5, 1.6; mean 2.98, S.D. 0.69; sig. 0.01*; N = 129
The manuscript is based on survey data: 1.6, 15.5, 69.8, 13.2, 0.0; mean 2.90, S.D. 0.59; N = 129
The manuscript shares a common database with another previously published manuscript: 2.3, 29.5, 62.0, 6.2, 0.0; mean 2.72, S.D. 0.61; sig. 0.00*, 0.00*; N = 129
The manuscript is based on secondary data: 0.8, 34.4, 60.0, 4.8, 0.0; mean 2.69, S.D. 0.57; sig. 0.10, 0.05*; N = 125
The manuscript uses a primarily univariate data analysis: 5.6, 25.4, 64.3, 4.8, 0.0; mean 2.68, S.D. 0.65; sig. 0.01*, 0.07*; N = 126
The manuscript is purely argumentative in nature containing no empirical data: 19.4, 41.1, 35.7, 3.9, 0.0; mean 2.24, S.D. 0.81; sig. 0.04*, 0.09*; N = 129
The results of the study are mostly insignificant: 17.3, 47.2, 31.5, 3.9, 0.0; mean 2.22, S.D. 0.78; sig. 0.06*, 0.02*, 0.03*; N = 127
The manuscript is based on case study type data and appropriate analysis which are not readily generalized: 20.9, 47.3, 27.9, 3.9, 0.0; mean 2.15, S.D. 0.79; sig. 0.03*; N = 129
The manuscript uses an unaccredited replication of methodology: 33.6, 43.6, 19.1, 3.6, 0.0; mean 1.93, S.D. 0.82; sig. 0.04; N = 110
The sample data is unrepresentative of the population: 57.0, 35.9, 5.5, 0.8, 0.8; mean 1.52, S.D. 0.71; sig. 0.05*; N = 128

Notes: (1) A five-point Likert scale (where 1 = detract strongly, 2 = detract partly, 3 = neutral, 4 = add partly and 5 = add strongly) was used to ascertain whether a particular characteristic of a manuscript would add or detract from the likelihood of publishing the work. (2) Figures in bold type are not significantly different from a mean of 3.0 using a t-test. All other mean values are significantly different from 3.0 at 0.10 or better level of significance. (3) Six background factors of the respondents were tested to see if they affected the results (journal perspective used (JPU), age, gender (sex), length of service in an academic post (LOS), number of years spent as a reviewer (YAR) and nationality (Nat)) using a Kruskal–Wallis test. Significant differences only are recorded and the level of significance given in the table. (4) An asterisk means that non-US perspective reviewers, older, male, longer serving, more years as a reviewer and UK reviewers scored significantly greater median ranks than US perspective, younger, female, shorter-serving, fewer years as a reviewer and US reviewers.

Table 3 summarises the responses to the 15 statements about the theory and the relationship of the manuscript to
previous work. Of these 15 statements, six significantly enhanced the chances of publication, one was non-significant
and eight significantly detracted from publication.
The two statements to which the respondents were most positively inclined both involved applying established
theories to a new area. This was particularly true for the application of an established non-accounting theory to an
accounting problem (mean 3.97). However, this was followed closely (mean 3.93) by attempts to apply an established
accounting theory to a new area. Over 80% believed in each case that this would enhance publishability. Even sug-
gesting a possible new theory was viewed positively (mean 3.33), with 42.6% of reviewers stating this would enhance
publishability. Original theory can also offset some of the negative views on statistical insignificance. Interestingly, if
a study’s results were not statistically significant but the theory interested the reviewer (mean 3.49), this significantly enhanced
publishability. A caveat was that unconvincing theory detracted from publication (mean 2.06). Respondents positively
viewed referencing previous work from the journal (mean 3.42). Conversely, if no references were made, this was
viewed negatively (mean 2.70).
Just as original work was viewed positively, unoriginal work was viewed negatively. Replication of previously
published theoretical work without any extension detracted from publication (mean 1.91) for 72.7% of reviewers.
Using a methodological approach which the journal editor encourages partially compensated for a relatively small contribution (mean 2.49), but
even then 47.7% of reviewers viewed this negatively. Moreover, new data with non-original findings (mean 2.51) and
‘think pieces’ containing no new data were both viewed negatively (mean 2.78). Another aspect of unoriginality was
overlapping work. Overlapping other published refereed work was highly unacceptable (mean 2.19), with 70.4% of
respondents more likely to reject. Overlapping work already in the review process (mean 2.39) was also perceived
negatively.
Table 4 summarises responses to statements about manuscript content. All seven statements were rated as sig-
nificantly different from three: four positively enhancing the chances of publication and three negatively affecting
publication. Originality again emerged as the key issue. If the manuscript differed from articles normally appearing
in the journal in its approach (mean 3.54), analysis (mean 3.54) or content (mean 3.46) the majority of respondents
were favourable. Manuscripts which were polemic in nature (mean 2.47), or based on secondary data (mean 2.50)
were viewed negatively by a large proportion of reviewers (45.5% and 48%, respectively). Unsolicited reviews also
attracted negative responses (mean 2.56), although less strongly.
Table 5 summarises responses to 13 statements on reviewer agreement and familiarity with research area. Ten
statements were rated significantly different from three: four positively enhancing the chances of publication and six
negatively impacting on publication success. Three results were neutral and non-significant.
In general, the respondents were influenced by their familiarity or experience in the area. This was reflected in
four matching statements. First, familiarity with the approach used enhanced publication chances (mean 3.37), but
unfamiliarity reduced the likelihood of publication (mean 2.92). Second, if the manuscript was based on a theory with
which the reviewer agreed this enhanced the likelihood of publication (mean 3.36); but if the reviewer disagreed, this
detracted (mean 2.70). Third, if the reviewer had expertise in the area this enhanced publication (3.35), but if the
reviewer had no expertise, this detracted (mean 2.77). Finally, if the reviewer has published in the area, this enhanced
the likelihood of publication (mean 3.27).
Inaccessibility was the key negative factor identified by respondents (mean 2.09) with 79.1% of reviewers believing
that a manuscript’s inaccessibility to those unfamiliar with the theory would detract from publication. A manuscript
was viewed negatively by 73.5% if they believed the theory to be over-emphasised (mean 2.27). A theory dominated
by statistical method (mean 2.52) was also viewed negatively by a substantial number of reviewers.
Table 6 summarises responses about 18 specific negative characteristics of the manuscript that affect publication.
All 18 statements were rated as significantly detracting from an article’s publishability. Moreover, these statements
provoked the strongest negative reaction amongst the eight categories of factors surveyed.
The standard of the writing appeared crucial. The manuscript being understandable but not well written (mean 1.74)
was viewed unfavourably by 94.6%. Even an interesting manuscript that was poorly written (mean 1.71) was more
likely to be rejected by 92.3% of respondents. Minor grammatical or construction errors (mean 2.19) were also viewed
negatively with 70.5% of respondents more likely to reject. This was confirmed by specific questions on the English
and the grammar. If a manuscript was of a publishable standard, except for substandard English, 84.7% of respondents
(mean 2.02) were more likely to reject. If the grammar alone was substandard 84.8% of respondents (mean 2.00) were
more likely to reject.
Table 3
Theory and relationship to previous work

(Figures for each statement: % choosing 1, 2, 3, 4 and 5 on the five-point scale described in note (1); mean; S.D.; significant background-factor p-values (see note (3)); N.)

The manuscript attempts to apply an established theory from a non-accounting academic area to an accounting topic: 0.8, 1.6, 14.8, 65.6, 17.2; mean 3.97, S.D. 0.68; sig. 0.04, 0.10; N = 128
The manuscript attempts to apply an established accounting theory to a new area: 0.0, 0.8, 18.0, 68.8, 12.5; mean 3.93, S.D. 0.58; N = 128
The study does not yield results which approach statistical significance but the theory is one that interests you: 1.6, 12.5, 28.9, 49.2, 7.8; mean 3.49, S.D. 0.87; N = 128
The manuscript contains references to previous work published in the journal: 0.0, 0.8, 57.0, 41.4, 0.8; mean 3.42, S.D. 0.53; sig. 0.01*, 0.07; N = 128
The manuscript suggests a new theory but does not test it: 0.8, 14.7, 41.9, 36.4, 6.2; mean 3.33, S.D. 0.83; sig. 0.09*; N = 129
The study does not yield results which approach statistical significance but the theory is the author’s own and is original: 3.9, 17.8, 28.7, 42.6, 7.0; mean 3.31, S.D. 0.97; sig. 0.08*; N = 129
The manuscript is a comment on previous work published in the journal: 3.9, 11.7, 60.9, 22.7, 0.8; mean 3.05, S.D. 0.73; N = 128
The manuscript is a ‘think piece’ and contains no new data: 6.3, 27.6, 51.2, 11.8, 3.1; mean 2.78, S.D. 0.85; N = 127
The manuscript contains no references to previous work published in the journal: 4.7, 21.1, 74.2, 0.0, 0.0; mean 2.70, S.D. 0.55; N = 128
The manuscript contains new data, but the findings are not original: 9.4, 40.2, 40.2, 10.2, 0.0; mean 2.51, S.D. 0.81; sig. 0.03*; N = 127
The manuscript contributes relatively little new work but uses a methodological approach which the journal editor encourages: 13.3, 34.4, 42.2, 10.2, 0.0; mean 2.49, S.D. 0.85; N = 128
The manuscript overlaps another which you have earlier been sent for review: 11.2, 41.6, 44.8, 1.6, 0.8; mean 2.39, S.D. 0.74; sig. 0.01; N = 125
The manuscript overlaps another published refereed paper which you have read: 14.4, 56.0, 26.4, 2.4, 0.8; mean 2.19, S.D. 0.74; sig. 0.01; N = 125
The manuscript develops a new theory although not convincingly: 26.4, 51.2, 12.4, 10.1, 0.0; mean 2.06, S.D. 0.89; N = 129
The manuscript replicates previous theoretical work that has been published in the journal and adds no new theory or extensions: 41.4, 31.3, 22.7, 4.7, 0.0; mean 1.91, S.D. 0.91; sig. 0.01; N = 128

Notes: (1) A five-point Likert scale (where 1 = detract strongly, 2 = detract partly, 3 = neutral, 4 = add partly and 5 = add strongly) was used to ascertain whether a particular characteristic of a manuscript would add or detract from the likelihood of publishing the work. (2) Figures in bold type are not significantly different from a mean of 3.0 using a t-test. All other mean values are significantly different from 3.0 at 0.10 or better level of significance. (3) Six background factors of the respondents were tested to see if they affected the results (journal perspective used (JPU), age, gender (sex), length of service in an academic post (LOS), number of years spent as a reviewer (YAR) and nationality (Nat)) using a Kruskal–Wallis test. Significant differences only are recorded and the level of significance given in the table. (4) An asterisk means that non-US perspective reviewers, older, male, longer serving, more years as a reviewer and UK reviewers scored significantly greater median ranks than US perspective, younger, female, shorter-serving, fewer years as a reviewer and US reviewers.
Table 4
Manuscript content

(Figures for each statement: % choosing 1, 2, 3, 4 and 5 on the five-point scale described in note (1); mean; S.D.; significant background-factor p-values (see note (3)); N.)

The manuscript is on a topic of interest in the area but differs from articles traditionally published in the journal in:
Approach: 0.8, 7.1, 35.4, 50.4, 6.3; mean 3.54, S.D. 0.75; N = 127
Analysis: 0.8, 7.1, 35.4, 50.4, 6.3; mean 3.54, S.D. 0.75; N = 127
Content: 0.8, 9.5, 39.7, 42.9, 7.1; mean 3.46, S.D. 0.80; N = 126
The manuscript is a meta analysis of other results: 0.8, 14.3, 59.5, 22.2, 3.2; mean 3.13, S.D. 0.72; N = 126
The manuscript is an unsolicited review: 13.6, 21.6, 60.0, 4.8, 0.0; mean 2.56, S.D. 0.79; sig. 0.06; N = 125
The manuscript contains only analysis of secondary data: 4.1, 43.9, 50.4, 1.6, 0.0; mean 2.50, S.D. 0.61; sig. 0.03*; N = 123
The manuscript is polemic in nature: 14.9, 30.6, 47.9, 5.8, 0.8; mean 2.47, S.D. 0.85; sig. 0.04*; N = 121

Notes: (1) A five-point Likert scale (where 1 = detract strongly, 2 = detract partly, 3 = neutral, 4 = add partly and 5 = add strongly) was used to ascertain whether a particular characteristic of a manuscript would add or detract from the likelihood of publishing the work. (2) Figures in bold type are not significantly different from a mean of 3.0 using a t-test. All other mean values are significantly different from 3.0 at 0.10 or better level of significance. (3) Six background factors of the respondents were tested to see if they affected the results (journal perspective used (JPU), age, gender (sex), length of service in an academic post (LOS), number of years spent as a reviewer (YAR) and nationality (Nat)) using a Kruskal–Wallis test. Significant differences only are recorded and the level of significance given in the table. (4) An asterisk means that non-US perspective reviewers, older, male, longer serving, more years as a reviewer and UK reviewers scored significantly greater median ranks than US perspective, younger, female, shorter-serving, fewer years as a reviewer and US reviewers.
Table 5
Reviewer agreement and familiarity with research area

(Figures for each statement: % choosing 1, 2, 3, 4 and 5 on the five-point scale described in note (1); mean; S.D.; significant background-factor p-values (see note (3)); N.)

The manuscript uses a type of approach with which you are familiar: 0.0, 0.0, 64.3, 34.1, 1.6; mean 3.37, S.D. 0.52; N = 129
The manuscript is based on a theory with which you agree: 0.0, 1.6, 60.9, 37.5, 0.0; mean 3.36, S.D. 0.51; N = 128
The manuscript is in an area in which you have expertise: 0.0, 4.7, 58.9, 33.3, 3.1; mean 3.35, S.D. 0.62; sig. 0.13, 0.08*, 0.07*, 0.03; N = 129
The manuscript is written in an area in which you have previously published: 0.0, 2.3, 69.0, 27.9, 0.8; mean 3.27, S.D. 0.51; sig. 0.08, 0.06*; N = 129
The manuscript uses statistical techniques which are valid but with which you are not familiar: 0.0, 5.7, 87.0, 7.3, 0.0; mean 3.02, S.D. 0.36; sig. 0.02, 0.01, 0.10; N = 123
The manuscript is based on a theory with which you are indifferent: 0.0, 4.7, 90.6, 4.7, 0.0; mean 3.00, S.D. 0.31; N = 128
The manuscript is written in an area in which you have not previously published: 0.0, 4.7, 92.2, 3.1, 0.0; mean 2.98, S.D. 0.28; sig. 0.05*, 0.08, 0.05, 0.03; N = 129
The manuscript uses a type of approach with which you are not familiar: 0.0, 13.5, 81.0, 5.6, 0.0; mean 2.92, S.D. 0.43; N = 126
The manuscript is in an area in which you have no expertise: 4.6, 14.7, 79.8, 0.9, 0.0; mean 2.77, S.D. 0.54; sig. 0.08*; N = 109
The manuscript is based on a theory with which you do not agree: 2.3, 30.5, 61.7, 5.5, 0.0; mean 2.70, S.D. 0.61; sig. 0.03*, 0.05*, 0.07*; N = 128
The work is based on a theory which you consider is becoming dominated by statistical method: 6.4, 39.2, 50.4, 4.0, 0.0; mean 2.52, S.D. 0.68; sig. 0.00*; N = 125
The work is based on a theory, the importance of which you believe to be over-emphasised: 0.8, 72.7, 25.0, 1.6, 0.0; mean 2.27, S.D. 0.50; sig. 0.03*; N = 128
The manuscript is written in a way which you feel is not easily accessible to those unfamiliar with the particular theory: 13.2, 65.9, 20.2, 0.8, 0.0; mean 2.09, S.D. 0.60; N = 129

Notes: (1) A five-point Likert scale (where 1 = detract strongly, 2 = detract partly, 3 = neutral, 4 = add partly and 5 = add strongly) was used to ascertain whether a particular characteristic of a manuscript would add or detract from the likelihood of publishing the work. (2) Figures in bold type are not significantly different from a mean of 3.0 using a t-test. All other mean values are significantly different from 3.0 at 0.10 or better level of significance. (3) Six background factors of the respondents were tested to see if they affected the results (journal perspective used (JPU), age, gender (sex), length of service in an academic post (LOS), number of years spent as a reviewer (YAR) and nationality (Nat)) using a Kruskal–Wallis test. Significant differences only are recorded and the level of significance given in the table. (4) An asterisk means that non-US perspective reviewers, older, male, longer serving, more years as a reviewer and UK reviewers scored significantly greater median ranks than US perspective, younger, female, shorter-serving, fewer years as a reviewer and US reviewers.
Table 6
Manuscript size, form, presentation and standard
Detract strongly Detract partly Neutral Add partly Add strongly Mean S.D. JPU Age Sex LOS YAR Nat N
(%), 1 (%), 2 (%), 3 (%), 4 (%), 5

The manuscript is considerably shorter than 3.1 26.4 60.5 8.5 1.6 2.79 0.70 0.07* 129
smaller articles that appear in the journal
but cannot be considered as a research
note
The manuscript is not in the form specified 0.8 33.6 60.9 4.7 0.0 2.70 0.57 0.01 128
in the journal style but is nevertheless
interesting
The manuscript contains a number of minor 12.4 58.1 27.9 1.6 0.0 2.19 0.66 0.08 129
grammatical or construction errors

T. Brinn, M.J. Jones / Accounting Forum 32 (2008) 89–113


The manuscript is considerably longer than 12.4 66.7 20.9 0.0 0.0 2.09 0.57 0.02 0.01 0.03 0.06 129
full size articles normally appearing in the
journal
The manuscript is understandable but is not 31.8 62.8 4.7 0.8 0.0 1.74 0.58 0.08* 129
well written
The manuscript is interesting but is poorly 38.8 53.5 6.2 1.6 0.0 1.71 0.65 129
written
The manuscript is of a publishable standard
except for the following which is below
publishable standard
Abstract 1.6 40.0 58.4 0.0 0.0 2.57 0.53 125
Appendices 0.8 58.1 41.1 0.0 0.0 2.40 0.51 0.03 124
References 6.5 54.8 37.1 1.6 0.0 2.34 0.62 0.04* 124
Tables 4.0 64.8 31.2 0.0 0.0 2.27 0.53 125
Introduction 6.4 68.8 24.8 0.0 0.0 2.18 0.53 0.04* 125
English 13.7 71.0 14.5 0.8 0.0 2.02 0.56 124
Grammar 15.2 69.6 15.2 0.0 0.0 2.00 0.55 0.09 125
Conclusion 19.2 66.4 12.8 1.6 0.0 1.97 0.62 125
Discussion 23.2 66.4 8.8 0.8 0.8 1.90 0.65 0.07 125
Literature review 25.6 60.8 12.8 0.8 0.0 1.89 0.64 125
Statistics 41.9 50.8 7.3 0.0 0.0 1.65 0.61 124
Method 57.3 37.1 4.8 0.8 0.0 1.49 0.63 124

Notes: (1) A five-point Likert scale (where 1 = detract strongly, 2 = detract partly, 3 = neutral, 4 = add partly and 5 = add strongly) was used to ascertain whether a particular characteristic of a
manuscript would add or detract from the likelihood of publishing the work. (2) Figures in bold type are not significantly different from a mean of 3.0 using a t-test. All other mean values are
significantly different from 3.0 at 0.10 or better level of significance. (3) Six background factors of the respondents were tested to see if they affected the results (journal perspective used (JPU),
age, gender (sex), length of service in an academic post (LOS), number of years spent as a reviewer (YAR) and nationality (Nat)) using a Kruskal–Wallis test. Significant differences only are
recorded and the level of significance given in the table. (4) An asterisk means that non-US perspective reviewers, older, male, longer serving, more years as a reviewer and UK reviewers scored
significantly greater median ranks than US perspective, younger, female, shorter-serving, fewer years as a reviewer and US reviewers.

The size of a journal article and its adherence to house format were also important. Excessive length (mean 2.09) was an important impediment to publication, and articles not in house format (mean 2.70) also detracted. Interestingly, none of these factors seems central to the quality of the research, and all are potentially easily correctable.
In general, any part of the manuscript being below publishable standard markedly reduced the chances of acceptance, but the method (mean 1.49), flaws with the statistics (mean 1.65), the quality of the literature review (mean 1.89), the discussion (mean 1.90) and the conclusion (mean 1.97) were all individually very important. More peripheral areas, such as the abstract (mean 2.57), the appendices (mean 2.40), the references (mean 2.34), the tables (mean 2.27) and the introduction (mean 2.18), were viewed slightly less negatively. Overall, however, if any
one part of a manuscript was below publishable standard this would seriously reduce the likelihood of successful
publication.
Table 7 summarises the responses to eight statements about the reviewer's knowledge of the author. Of the eight categories surveyed, this one produced the most moderate responses. This may well reflect the potentially sensitive nature of the statements: reviewers may not wish to admit to bias or favouritism, and their opinions may not reflect their actual practice. In six cases, the mean response was not significantly different from three. In only two cases did a statement significantly enhance the possibility of publication. First, knowing the probable identity of the author where the author has a strong reputation in the area added to the likelihood of acceptance (mean 3.34). Second, a belief that the journal editor had solicited the manuscript was also viewed positively (mean 3.19, with 24.9% of respondents indicating it would add to the likelihood of acceptance). Overall, however, our reviewers perceived that their personal knowledge of the author was unlikely to affect their review decision.
Table 8 concerns the issue of a manuscript’s prior dissemination at research conferences. Both statements mildly
enhanced the chances of publication. This may reflect the belief that prior dissemination exposes papers to peer appraisal
and criticism. Such appraisal and criticism might normally be expected to enhance a paper’s publishability.

4.2. Stratified findings

We next examined the results to see how, if at all, the background factors affected the responses. The first point to note was that there was great consistency across these factors.7 Where the responses did differ, or where substantial minorities had different views, this was generally not associated with any obvious background factor. We tested the background factors using a Kruskal–Wallis test. All differences significant at the 0.10 level or better are presented in the tables.8 We also conducted a multivariate logit analysis. The findings were generally similar using both analyses. We thus report only the Kruskal–Wallis test results in this paper.9 Below we discuss only those differences which appear systematically linked to the background or journal characteristics. We limit our discussion to those background factors where there were substantial numbers of differences, i.e., journal perspective adopted (19 significant differences), age (14 significant differences), gender (16 significant differences), length of service (13 significant differences), length of time spent as a reviewer (13 significant differences) and nationality (15 significant differences).
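To make the two testing conventions concrete, the short sketch below shows how the one-sample t-test against the neutral midpoint used in the table notes, and a Kruskal–Wallis comparison across the groups of a background factor, might be run in Python with scipy. It is an illustrative sketch only: the variable names and example responses are invented for demonstration and are not the study's data or the authors' code.

```python
# Illustrative sketch only: invented responses, not the study's data.
from scipy import stats

# Likert scores (1 = detract strongly ... 5 = add strongly) for one statement,
# split by a hypothetical binary background factor such as journal perspective used (JPU).
us_perspective = [2, 2, 3, 2, 1, 2, 3, 2]
non_us_perspective = [3, 3, 2, 4, 3, 3, 2, 3]

# Table-note convention: one-sample t-test of the mean response against the
# neutral scale midpoint of 3.0.
all_responses = us_perspective + non_us_perspective
t_stat, p_mid = stats.ttest_1samp(all_responses, popmean=3.0)

# Background-factor convention: Kruskal-Wallis test of whether the two groups
# differ in their median ranks.
h_stat, p_group = stats.kruskal(us_perspective, non_us_perspective)

print(f"Mean vs. 3.0: t = {t_stat:.2f}, p = {p_mid:.3f}")
print(f"JPU difference: H = {h_stat:.2f}, p = {p_group:.3f}")

# Only differences significant at the 0.10 level or better would be recorded
# in the tables, following the paper's reporting convention.
if p_group <= 0.10:
    print("Recorded as a significant difference for this background factor")
```

In practice each of the 87 statements would be tested in this way against each of the six background factors.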
A key determinant of responses was the journal perspective identified (see Appendix A). In total, only 72 out of 129 respondents answered this question from the perspective of 30 journals. The perspectives adopted were fragmented, with only four journals occurring more than four times: Issues in Accounting Education (eight times), The Accounting Review (seven times), Journal of Business Finance and Accounting (six times) and Auditing: A Journal of Practice and Theory (six times). We used this question, and the journals reviewed, as a basis for splitting the sample into critical vs. mainstream perspectives. We also investigated whether respondents using a US journal perspective had significantly different views from those adopting a non-US journal perspective.
Nineteen significant differences were observed for journal perspective used (JPU), more than for any other background characteristic. Interestingly, 14 of these differences were clustered in two tables: Table 2 (10 significant differences on data type and investigation used) and Table 7 (four significant differences on author knowledge).
In Table 2, the journal perspective used affected three major areas: significance, type of statistics used, and type of

7 This homogeneity in background characteristics is consistent with Gilliland and Cortina’s (1997) finding from psychology. They found few
differences in background characteristics when they looked at original submissions to the Journal of Applied Psychology.
8 The standard deviations of the responses to some statements are so low as to indicate virtual uniformity in the responses. Accordingly no sub-analysis by background characteristics is conducted on those statements where the standard deviation is below 0.25.
9 In addition, we carried out a principal components analysis. The results did not suggest any strong components emerging. We, therefore, do not report the results here.
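As a companion to these notes, the sketch below indicates how the uniformity screen described in footnote 8 and the principal components analysis mentioned in footnote 9 might be implemented. It is a minimal sketch under stated assumptions: the randomly generated matrix merely stands in for the real 129-respondent by 87-statement response data, and the use of scikit-learn's PCA is one plausible choice rather than a description of the authors' procedure.

```python
# Minimal sketch: random placeholder data standing in for the real Likert matrix.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(129, 87)).astype(float)  # hypothetical 1-5 scores

# Footnote 8: statements whose standard deviation falls below 0.25 are treated as
# virtually uniform and excluded from the sub-analysis by background characteristics.
near_uniform = np.flatnonzero(responses.std(axis=0) < 0.25)
print("Statements excluded from sub-analysis:", near_uniform)

# Footnote 9: a principal components analysis of the statement responses; the
# explained variance ratios show whether any strong components emerge.
pca = PCA().fit(responses)
print("Variance explained by the first five components:",
      np.round(pca.explained_variance_ratio_[:5], 3))
```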


Table 7
Reviewer’s knowledge of the author
Detract strongly (%), 1   Detract partly (%), 2   Neutral (%), 3   Add partly (%), 4   Add strongly (%), 5   Mean   S.D.   JPU   Age   Sex   LOS   YAR   Nat   N

You know the probable identity of the author(s) and the author(s) has a strong reputation in the area   0.0   0.8   65.6   32.8   0.8   3.34   0.51   0.06   0.02   128
The manuscript has been solicited, you believe, by the journal editor   2.3   2.3   70.5   23.3   1.6   3.19   0.61   129
You have previously commented on a version of the paper for the author(s)   1.7   9.2   69.7   19.3   0.0   3.07   0.59   0.05   119

You know the probable identity of the author(s) and the author(s) is a member of the editorial board   0.8   0.8   91.3   7.1   0.0   3.05   0.33   127
The manuscript is one which is written, you believe, by at least one UK author   0.0   2.3   95.3   2.3   0.0   3.00   0.22   0.09   0.05   129
The manuscript is one which is written, you believe, by at least one US author   0.0   1.6   98.4   0.0   0.0   2.98   0.12   0.09*   0.05*   129
You know the probable identity of the author(s) and the author(s) has no reputation in the area written about   0.8   4.7   94.5   0.0   0.0   2.94   0.27   128
You know the probable identity of the author(s) and the author(s) has been critical of your research in the past   0.0   14.1   81.3   2.3   2.3   2.93   0.50   0.012*   0.02*   128

Notes: (1) A five-point Likert scale (where 1 = detract strongly, 2 = detract partly, 3 = neutral, 4 = add partly and 5 = add strongly) was used to ascertain whether a particular characteristic of a
manuscript would add or detract from the likelihood of publishing the work. (2) Figures in bold type are not significantly different from a mean of 3.0 using a t-test. All other mean values are
significantly different from 3.0 at 0.10 or better level of significance. (3) Six background factors of the respondents were tested to see if they affected the results (journal perspective used (JPU),
age, gender (sex), length of service in an academic post (LOS), number of years spent as a reviewer (YAR) and nationality (Nat)) using a Kruskal–Wallis test. Significant differences only are
recorded and the level of significance given in the table. (4) An asterisk means that non-US perspective reviewers, older, male, longer serving, more years as a reviewer and UK reviewers scored
significantly greater median ranks than US perspective, younger, female, shorter-serving, fewer years as a reviewer and US reviewers.
Table 8
Dissemination of the manuscript
Detract strongly (%), 1   Detract partly (%), 2   Neutral (%), 3   Add partly (%), 4   Add strongly (%), 5   Mean   S.D.   JPU   Age   Sex   LOS   YAR   Nat   N

The manuscript has previously been presented at a national research conference   0.0   6.3   60.2   33.6   0.0   3.27   .57   .03   .06   128
The manuscript has previously been presented at more than one research conference   2.3   8.6   60.2   27.3   1.6   3.17   .70   .04   .07   128

Notes: (1) A five-point Likert scale (where 1 = detract strongly, 2 = detract partly, 3 = neutral, 4 = add partly and 5 = add strongly) was used to ascertain whether a particular characteristic of a
manuscript would add or detract from the likelihood of publishing the work. (2) Figures in bold type are not significantly different from a mean of 3.0 using a t-test. All other mean values are
significantly different from 3.0 at 0.10 or better level of significance. (3) Six background factors of the respondents were tested to see if they affected the results (journal perspective used (JPU),
age, gender (sex), length of service in an academic post (LOS), number of years spent as a reviewer (YAR) and nationality (Nat)) using a Kruskal–Wallis test. Significant differences only are
recorded and the level of significance given in the table. (4) An asterisk means that non-US perspective reviewers, older, male, longer serving, more years as a reviewer and UK reviewers scored
significantly greater median ranks than US perspective, younger, female, shorter-serving, fewer years as a reviewer and US reviewers.


investigation. First, reviewers answering from the perspective of US journals were much less tolerant of non-significant data. Significant data greatly enhanced the prospects of publication. Second, reviewers from a US journal perspective were much keener on the use of appropriate statistics or of mixed parametric and non-parametric statistics. Finally, non-US academics were kinder towards appropriate case study data and towards data which is unrepresentative of the population. In Table 7, some differences were identifiable based on author knowledge. First, there appears to be a mild ‘national’ influence. Those answering from a US journal perspective were marginally influenced in favour of publications by US authorship, whereas the reverse was true for UK authorship. Second, prior comments on a version of a paper were viewed as enhancing the likelihood of publication by reviewers adopting a non-US journal perspective. Finally, knowledge of the probable authors’ identity, where the authors had been critical of the reviewer’s research, was more likely to be a negative factor for non-US reviewers.
Interestingly, and counter to expectations, we found little difference between critical and mainstream researchers.
Using the threefold classification (critical research only; mainstream research only; and mixed research—both critical
and mainstream) we found only three significant differences. When a broader twofold partition was undertaken, comparing critical (i.e., both critical and mixed) with mainstream researchers, six significant results were found.10 In all six cases, the critical researchers were harsher than the mainstream researchers. Of particular importance was the strongest finding, on case study data (0.007): critical researchers were much more critical of the use of case study data which could not be generalised. Although they were harsher on these six significant results, overall critical researchers were more lenient than mainstream researchers. In
52 out of 83 responses, critical researchers were more likely to accept a manuscript than mainstream researchers when
judged against the selection criteria.
On age, there was some evidence that younger reviewers (under 50) were slightly less liberal in some areas than
older ones (over 50). Examining the median responses of both groups, younger reviewers were less ready to accept case
study data than older reviewers and were also influenced less by references to previous work published in the journal.
Younger reviewers were more hostile than older reviewers to unoriginal findings even when based on new data. The
only area in which older reviewers seemed measurably less liberal was that they were less likely to accept manuscripts that were polemic in nature. This may reflect a wariness (developed with experience) towards polemic arguments.
If age makes for slightly more liberal views, there was some evidence that female reviewers were stricter. Female reviewers
were less willing than males to allow an original theory to offset lack of statistical results or to tolerate replications
that added no new theory or extensions. Female reviewers were also more hostile towards theories with which they did
not agree and theories dominated by statistical methods. The only area where female reviewers were more liberal was
an increased likelihood of accepting work from authors whose identity was known and who had a strong reputation in
the area.
Those with shorter service in an academic post and/or fewer years as a reviewer were less tolerant in certain areas than those with longer service and/or more years as a reviewer, although not universally so. Length of service was correlated with age.11 However, the precise areas of difference were not the same. Both those reviewers with shorter service (less-service reviewers) and those with fewer years as a reviewer (less-time reviewers) were favourably influenced by the manuscript’s having been presented at research conferences, whereas those with longer service or more time were neutral. Where the manuscript shared a common database with another previously published paper, those with less service were less tolerant. Less-service reviewers were also less tolerant towards primarily univariate data analysis. Those with less than 10 years’ service were also more likely to reject a theory with which they did not agree. By contrast, those with less service were more tolerant of manuscript length and of minor grammatical and construction errors. Where the reviewer did not agree with the theory, it was less-time reviewers who were less tolerant. Reference to previous work in the journal

10 These six results were (i) the topic is primarily of interest to accounting practitioners, but is treated in an academic way; (ii) the manuscript is based on case study data but is not readily generalised; (iii) the manuscript is on a topic of interest, but different from articles traditionally published in the journal in terms of content; (iv) the manuscript is written in a way which you feel is not accessible to those unfamiliar with the particular theory; (v) the manuscript is considerably shorter than small articles, but cannot be considered as a research note; (vi) the manuscript has been solicited, you believe, by the journal editor.
11 We calculated correlation coefficients for the five background factors. Age is positively correlated with length of service (0.338), years as reviewer (0.235) and nationality (0.006), and negatively correlated with sex (−0.217). Sex is positively correlated with nationality (0.103) and is negatively correlated with length of service (−0.161) and years as reviewer (−0.215). Length of service is positively correlated with years as reviewer (0.288) and negatively correlated with nationality (−0.024). Nationality is negatively correlated with years as reviewer (−0.59) and positively correlated with journal perspective used (0.538). All correlations are significant at the 0.10 level or better, except for those between nationality and the other background factors, none of which is significant.

was viewed more positively by less-time reviewers than by more-time reviewers. Finally, less-time reviewers were also
inclined to be neutral towards previous presentation of the manuscript at conferences whereas more-time reviewers
viewed it positively.
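The pairwise correlations reported in footnote 11 above could be computed along the lines of the following sketch. It is purely illustrative: the records and the 0/1 codings are invented, and Spearman rank correlation is used here because several of the background factors are ordinal or binary; the paper does not state which correlation coefficient was actually employed.

```python
# Illustrative sketch with invented records; not the study's data.
import pandas as pd

respondents = pd.DataFrame({
    "age": [38, 45, 52, 61, 47, 55],
    "sex": [0, 1, 0, 0, 1, 0],                  # hypothetical coding: 0 = male, 1 = female
    "length_of_service": [8, 15, 20, 30, 12, 25],
    "years_as_reviewer": [3, 10, 12, 22, 6, 18],
    "nationality": [1, 1, 0, 0, 1, 0],          # hypothetical coding: 0 = US, 1 = UK
})

# Pairwise Spearman rank correlations between the background factors, analogous
# to the coefficients quoted in footnote 11.
print(respondents.corr(method="spearman").round(3))
```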
The UK and US respondents’ perceptions differed across a range of characteristics. The most numerous and strongest differences were found in Table 2 (attitudes to types of research). Although three of these differences confirmed the findings from the journal perspective used, three were different: the ready generalisation of case study data, interview data, and unaccredited replication of methodology. UK academics were more tolerant of papers based on non-generalisable case study material or on interview data. These findings tie in with the general perception that UK academics are less driven
by quantitative methodologies than US academics. However, UK academics were less tolerant of papers which over-
lapped with prior refereed papers or manuscripts they had reviewed earlier. UK respondents were also less likely to
reject a study’s results if they were mostly insignificant. However, they were more likely to reject if there was an
unaccredited replication of methodology or if the manuscript was considerably longer than the average for the journal. UK respondents were also less likely than US respondents to find that the appropriate use of mixed parametric and non-parametric techniques, or of simple statistics, enhanced publication. Finally, expertise and author knowledge were viewed differently by UK and US academics. Lack of expertise made UK academics more likely than US academics to reject articles, while expertise in the area made them more likely to accept articles. In addition, UK academics were more tolerant of authors who had been critical of their research in the past.
Since the background characteristics of our respondents did not systematically influence their opinions, we treat
our findings in the discussion below as broadly homogeneous.

5. Discussion

Several issues are highlighted by these results. Across a disparate group of reviewers representing a range of
journals and nationalities there was a high degree of consensus as to which issues affect publishability. There were, for
example, very few differences between critical and mainstream researchers. The standard deviation of the responses
did not, for instance, exceed one for any of the statements. In general, there was a high degree of consistency in
perceptions of which factors were important in the reviewing process, at least in relation to the factors assessed in this
study.
While the overall pattern suggested consensus, it is worth pointing out that on certain specific manuscript charac-
teristics reviewers did express divergent opinions. The responses to 15 statements had standard deviations above 0.8.
It is noteworthy that these were highly concentrated, with 8 of the 15 appearing in Table 3 (relationship to previous
work). In particular, divergence tended to revolve around the importance of lack of statistical significance, whether the
manuscript contained theory, the originality of the findings and think pieces containing no new data. If consistency
across reviewers is necessary for an effective review process, then it was lacking in these areas. The publishability of
manuscripts with these characteristics may, at least partially, depend on the choice of reviewer.
As might be expected, certain factors impacted upon publishability more than others. The four categories that respondents found most important were manuscript characteristics, manuscript content, relationship to previous work, and manuscript size, form, presentation and standard. The majority of respondents considered these characteristics of the manuscript important for publishability.
By contrast, research topic, reviewer opinion, author knowledge and dissemination of the manuscript were considered
less important. Interestingly, the more important factors almost all related to the inherent characteristics of the work.
From our study, we can distil certain clear dos and don’ts which will enhance or detract from the likelihood of
publication. In general, the respondents felt more strongly about those factors detracting from publication than about
those enhancing publication. Only eight statements scored a mean of over 3.5. By contrast, in 29 cases a mean of
less than 2.5 was scored. This clearly underlines the competitive nature of the research process, where rejections,
especially for leading journals, are much more common than acceptances (Cummings et al., 1985; Pondy, 1995). In
such circumstances, the review becomes a pre-publication filter, ‘a veto process separating wheat from chaff’ (Rousseau,
1995, p. 187).
Four broad factors clearly enhanced the chances of acceptance. First, the statistical significance of the results is vital.
Its presence enhances a manuscript; its absence actively damages it. This result was perceived particularly strongly
by those reviewers answering from a US journal perspective. This confirms the findings of Czyzewski and Dickinson
(1990) and Kerr et al. (1977). However, it contradicts Campbell’s (1982, p. 693) personal opinion that ‘negative results
were not a particularly important reason for rejection’ in the Journal of Applied Psychology. Interestingly, Schneider

expresses his personal opinion that the real problem is not insignificant results, but insignificant research. ‘In other
words statistical non-significance is only significant when the research effort itself is insignificant’ (Schneider, 1995,
p. 244). There is, indeed, some support for Schneider in the findings that studies which do not approach statistical
significance but are interesting to the reviewer or are original are more likely to be published. Second, originality is
also vitally important. Originality in any aspect of the work enhances it, whether in approach, analysis or content. This
confirms Daft et al.’s (1987) finding that organisational scholars value original research, although originality can, of course, be an inherently subjective criterion. The importance of originality also agrees with Reiter and Williams (2002), who argue that innovation, a crucial ingredient of originality, is essential for progress in accounting research. Unfortunately, Reiter and Williams see
innovation as generally lacking in accounting research. Third, the use of appropriate statistical methods is important.
Indeed, appropriate use of methods rather than complexity per se appears to be most important. Appropriate statistical
methods were of particular concern to reviewers answering from a US journal perspective. Fourth, the application of
theory is valued by reviewers, whether this is established theory applied to a new problem or new (non-accounting)
theory applied to an accounting problem. This echoes Daft et al.’s (1987) finding that a paper’s likelihood of acceptance is enhanced if it contains a new theory with significant results. This finding also accords with Daft (1995), who found that he rejected fully one-half of the 111 manuscripts he reviewed in the organisational behaviour area because they lacked theory.
These results are consistent with the idea of the continuity of academic development with incremental advance in
theory, particularly from other disciplines, being valued by the reviewing community. Reiter and Williams (2004),
for example, show that ‘accounting scholars have . . . imported reputation structures for knowledge production and
validation from economics’. Accounting scholars also commonly import theories from finance and from sociology.
There were also some factors that, while less important, can still be helpful. If the research topic has not previously been published in the journal, or if the manuscript references previous work in the journal, this influences some referees, as does using an
in-favour methodological approach. Authors will also benefit from being aware of current methodological approaches.
Certain factors detract from the chances of publication. First, replications were not popular. This confirms Kerr et
al.’s (1977) result. Second, the use of secondary data was also a negative factor. Neither of these factors provides a new contribution, which damages the chances of publication. Third, polemic pieces were also unpopular with many reviewers. Such contributions may be more difficult to assess, so reviewers may tend to play safe and reject. This is consistent with the view that in a high-rejection discipline, such as the social sciences, reviewers seem to follow the rule: if in doubt, reject (Cummings et al., 1985). Moreover, the typical respondent is a US professor, and this type of research approach is not
favoured in the US environment. Fourth, work that overlaps other manuscripts (published or unpublished) was another
negative factor. The overlap presumably erodes the paper’s new contribution. Fifth, accessibility of the manuscript was
also extremely important. Poor writing standards, excessive length and even a low standard in any single key area of
the manuscript all damaged chances of acceptance.
Most of these factors that enhanced or detracted from the chances of publication are controllable by the
researcher—for example, originality, appropriate statistical analysis, or polemics. However, in certain cases an element
of luck appears present in the publication process. The overwhelming importance of statistical significance is notable.
If a project is well designed and well executed but does not yield significant results, this appears to be a severe impediment to publication. It is not something that the authors can legitimately do anything about. However, it does put potential pressure on authors to manipulate, or even doctor, the results. Similarly, it is not always possible to establish ex ante
exactly whether results will be generalisable.
Just as several categories of factors enhanced or detracted from publishability, several others were relatively unim-
portant. Prior dissemination of the manuscript had only a marginal positive impact. In general, the reviewer’s personal
knowledge of the author or the author’s nationality proved unimportant. These issues, although not inherent to the research content, reflect aspects of the research environment. For instance, the dissemination of the manuscript is not necessary per se, nor will it necessarily improve the research.
This self-perception by reviewers that author knowledge was not important should reassure authors, particularly
newer ones. Sometimes, it is suggested that authors believe prior author knowledge can bias the review system and can
contribute to a gate-keeping effect (Crane, 1967). Reviewers, on the other hand, seem remarkably unconcerned. Most
were neutral about knowing the identity of authors and professed not to be influenced by such knowledge. A caveat
is that a minority of reviewers were influenced by the author having a strong reputation or by a solicited manuscript.
This does, however, contrast with Kerr et al.’s (1977) finding that strong author reputation was very important in
management and social science journals. Outside these specific cases, though, there appears to be no effect.

There was a perhaps surprising degree of homogeneity between the responses of critical and mainstream researchers.
The two groups were consistent in stressing the importance of statistical significance, appropriate statistical methods, theory and originality. Prima facie, the characteristics which enhanced the publishability of
manuscripts, therefore, appeared to transcend the particular research area. The one important area in which they differed
was that critical researchers were more concerned with the generalisability of case study material than mainstream
researchers. More research into this area would be useful to see whether critical and mainstream researchers do indeed
have very similar approaches to the broad criteria on which manuscripts are evaluated—even though they approach
that research from very different perspectives.
The impact of nationality is interesting. The overall unimportance of national origin is also encouraging for non-US
authors attempting to publish in US journals. For instance, UK authors publish comparatively infrequently in US
journals leading to suggestions of national bias (Brinn et al., 2001). However, the majority of respondents were US
based and perceived that national origin had no effect. At the margins, however, nationality appeared to play some
part. Reviewers answering from both a UK and a US journal perspective were generally inclined to be more positive about authors they believed to be of the same nationality as the journal. In turn, this may reflect a reluctance to publish studies based on data from a country other than the journal’s own.
Reviewers thus seem to actively strive to keep their own personal biases from intruding into the decision process.
However, this may also consciously or unconsciously reflect the reviewers’ opinions of how they should aspire to
behave. It may not necessarily reflect their actual behaviour. Occasionally, for example, background characteristics
did influence the likelihood of publication. In particular, non-US researchers were more affected if they believed
that authors had been critical of their research in the past. Although reviewers may differ in their perceptions of the
importance of different aspects of the manuscript, they seek to be consistent within that framework. The great majority
of reviewers, for instance, resisted the temptation to dismiss a theory with which they did not agree. The type of data
and the type of statistics used were broadly regarded as unimportant. It is possible that this is explained by the caveat
that data and statistics must be appropriate to the topic studied. Reviewer history of publishing in the area was also
unimportant. Reviewers were not affected negatively by indifference to a theory or by lack of familiarity with the
statistics.
However, personal preferences are not totally excluded from the review process. Factors outside the authors’ control
are also important. The editor’s choice of reviewer can be critical. If the reviewer is familiar with the type of approach, agrees with the author’s theory, has previously published in the area or has expertise in the area, these can all work in the potential author’s favour. In some cases, if the reviewer is aware of the author’s strong reputation, this can also prove beneficial. A clear instance of subjectivity is that if the results are not statistically significant but the reviewer is interested in them, this will enhance publishability. This reflects the view that the research process is not necessarily
a quest for objective truth, but is often dominated by the subjective decisions of editors and reviewers (Morgan, 1985,
p. 63).
There is, overall, clearly a fine line between setting work in the context of the literature and failing to make any
substantive contribution to it. Not setting the research within a stream of research literature is a fault, but so is failing to
extend that literature. This reflects the classic approach to incremental gains in knowledge. Continuity is valued, and
work needs to be embedded within the journal and subject’s literature. At the same time, although perhaps to a more
limited degree, new approaches and new theory are valued.
Although the reviewers’ responses were sometimes dispersed, these differences did not appear systematically related
to background factors (such as age, gender, time as reviewer, length of service and nationality) in any obvious way. The
evidence is not strong enough to indicate that a different review outcome would occur. There were two exceptions.
First, the findings on nationality and the journal perspective used tended to reinforce each other. Reviewers answering
from a US journal perspective were much more inclined than reviewers adopting a UK perspective to place particular emphasis upon methodological issues. In particular, they valued significant results and the use of appropriate statistics,
were concerned about the generalisability of sample data and were much more reluctant to publish case study data.
Meanwhile, UK reviewers were more in favour of interview data and case study data which was generalisable. This
confirms the notion that US journals are much more methods-driven than those in other countries, particularly the
UK. In particular, they favour positivistic research rather than interviews or case studies. This accords with the work of authors such as Fleming et al. (2000), Reiter and Williams (2002) and Williams et al. (2006), who show that in the US positive, empirical research dominates, with other research, such as behavioural research, interview studies and case studies, being marginalised. Second, female reviewers and younger review board members (as judged by physical age,

time spent as a reviewer and length of service) appeared somewhat stricter in some respects than other reviewers. It is possible
that younger reviewers who are at a relatively early stage of their careers may value publications more and so apply
higher standards. Female reviewers may feel a need to demonstrate higher standards in a traditionally male-dominated
environment. There is no certain explanation for the results for gender or for age. The finding on age does, however,
concur with some prior research in organisational behaviour (cited in Cummings et al., 1985) that senior reviewers
recommend more manuscripts for publication than junior reviewers.
There are several limitations with this research. First, a potential, but unavoidable, problem with this type of research
is that self-reporting may produce inaccurate answers. In particular, reviewers may not be prepared to admit bias even
if they are aware of it. Even if this is true, however, the results represent a view of how reviewers believe they do behave and should aspire to behave. As such, they present a picture of the reviewers’ thought processes and the ethos that
underlies the review process. A second problem is that opinions are inherently subjective. Individual respondents may
differ in their understanding or interpretation of the questions asked. This problem is inherent, and difficult to avoid, in
questionnaire studies. A third limitation is that we asked our respondents to treat each statement in isolation. In reality,
issues may interact with each other to produce a more complex effect. It is, however, difficult to capture this type of
effect in survey data or indeed using other research methods. Fourth, the respondents were mainly English-speaking and
most were US academics. Although this reflects the characteristics of the majority of accounting academics on editorial
boards, this means that the findings may be less relevant to non-US journals. Finally, although the reviewers may agree
in theory on the relative factors which will enhance or detract from publication, in practice, their interpretation of the
various factors may differ between reviewers and between manuscripts. Moreover, their relative weighting of each
factor may vary.

6. Conclusions

This research was based on 129 responses from experienced members of accounting review boards. They gave their
views on 87 statements which affected the publishability of manuscripts in accounting journals. They indicated certain
factors which enhanced the chances of publication and also some which detracted from publication.
Reviewers are sometimes accused of arbitrariness and subjectivity. In this study, however, there are many areas that
show a high degree of consistency. There is substantial agreement about the important characteristics of a manuscript.
This agreement appears to transcend the particular research area. In certain cases, however, there is some evidence of variability in reviewer opinion about important characteristics of the manuscript (the importance of statistical significance, the presence of theory and originality). Therefore, although reviewers generally appear not to allow their personal prejudices to affect the review process, they do sometimes have different views of what is important. Reviewers
perceive they are indifferent to their knowledge of the authors or to the authors’ nationality. They are, however,
affected if they believe an author has a strong reputation or if they believe the editor has solicited a manuscript. In
addition, the editor’s choice of particular reviewers is still crucial as reviewer opinion is contingent upon factors such as
familiarity with the topic or prior experience of the area. Even though authors can target their research to have desirable characteristics, our results may, therefore, reinforce the perception that some ‘luck’ is involved in the reviewing process.
Our research demonstrates a number of factors that either enhance or detract from publication. The four leading
factors which will enhance the chances of acceptance are statistical significance, originality, appropriate statistics
and application of theory. By contrast the five leading factors which detract from publication are replications, use
of secondary data, polemic pieces, overlapping work and manuscript accessibility. In addition, our research suggests
that reviewers for US journals are more concerned than reviewers for non-US journals with statistical significance,
appropriate statistics, and generalisable results from samples and case studies.
This information is potentially useful to all those involved in the review process. Potential authors can see what
reviewers perceive to be important. Meanwhile, editors and reviewers can benchmark their own opinions against those
of a large number of experienced reviewers.
Further research could examine whether reviewing standards vary across individual journals, particularly by journal quality. It would be interesting to know if reviewers apply different standards depending on the journal involved. In particular, a more focused study investigating the differences between critical and mainstream researchers would be useful, perhaps using interviews, as in Parker, Guthrie, and Gray (1998). An interdisciplinary approach comparing practices in other disciplines, so as to obtain a clearer picture of the reviewing process, might also be fruitful. In addition, studies in accounting which examine the issue of inter-reviewer reliability would be interesting.

Overall, our results provide a degree of quantifiable insight into the characteristics of academic manuscripts that
contribute to, or detract from, their publishability. The international nature of the research, the broad range of journal
reviewers represented and the experience of the reviewers all enhance the applicability of these results.

Acknowledgements

I would like to acknowledge the help of participants at the European Accounting Association Conference in Gothenburg in 2005 and at the Financial Reporting and Business Communication Conference in 2005. I am also grateful to Jan Richards and Mark Clatworthy for analysing and processing additional data after Tony’s untimely death.

Appendix A

Editorial Board Members: journal perspective

UK journals
Accounting and Business Research 3
Accounting Organizations and Society 3
Financial Accountability and Management 1
Journal of Business Finance and Accounting 6
Management Accounting Research 4
Total 17

US journals
Accounting Enquiries 2
Accounting Historians Journal 2
Accounting Horizons 1
Advances in Accounting 2
Advances in Taxation 2
Auditing: A Journal of Practice and Theory 6
Behavioral Research in Accounting 3
Intelligent Systems in Accounting, Finance and Management 1
International Journal of Accounting 1
International Journal of Accounting Information Systems 1
Issues in Accounting Education 8
Journal of Accounting and Public Policy 1
Journal of Accounting, Auditing and Finance 1
Journal of Accounting Education 1
Journal of Finance 1
Journal of International Accounting, Auditing and Taxation 4
Journal of International Accounting 2
Journal of Management Accounting Research 3
Journal of Accounting Information Systems 1
Journal of Information Systems 1
Journal of the American Taxation Association 1
Research in Accounting Regulation 1
The Accounting Review 7
Total 53

Australian journals
Accounting, Auditing and Accountability Journal 2

Grand Total 72

References

Bakanic, V., McPhail, C., & Simon, R. J. (1987). The manuscript review and decision-making process. American Sociological Review, 52(5),
631–642.

Ballas, A., & Theoharakis, V. (2003). Exploring diversity in accounting through faculty journal perceptions. Contemporary Accounting Research,
20(4), 619–644.
Beattie, V., & Goodacre, A. (2004). Publishing patterns within the UK accounting and finance academic community. British Accounting Review, 20(4), 620–644.
Beyer, J. M., Chanove, R. G., & Fox, W. B. (1995). The review process and the fates of manuscripts submitted to AMJ. Academy of Management
Journal, 38(5), 1219–1255.
Briloff, A. (2004). Accounting scholars in the groves of academia in Pari Delicto. Critical Perspectives on Accounting, 15(6–7), 787–796.
Brinn, T., & Jones, M. J. (2007). Editorial boards in accounting: The power and the glory. Accounting Forum, 31(1), 1.
Brinn, T., & Jones, M. J. (2008). The composition of editorial boards in accounting: A UK perspective. Accounting, Auditing and Accountability Journal, 21(1), 5–35.
Brinn, T., Jones, M. J., & Pendlebury, M. (1996). UK accountants’ perception of research journal quality. Accounting and Business Research, 26(3), 265–278.
Brinn, T., Jones, M. J., & Pendlebury, M. (2001). Why do UK accounting and finance academics not publish in top US journals? British Accounting Review, 33, 223–232.
Brown, R., Jones, M., & Steele, T. (2007). Still flickering at the margins of existence? Publishing patterns and items in accounting and finance
research over the last two decades. British Accounting Review, 39(2), 125–152.
Campbell, J. P. (1982). Some remarks from the outgoing editor [Editorial]. Journal of Applied Psychology, 67(6), 691–700.
Campion, M. A. (1993). Article review check list: A criterion checklist for reviewing research articles in Applied Psychology [Editorial]. Personnel
Psychology, 56, 705–718.
Chan, K. C., Chen, C. R., & Cheng, L. T. W. (2007). Global ranking of accounting programmes and the elite effect in accounting research. Accounting
and Finance, 47(2), 187–220.
Chung, K. H., Park, H. S., & Cox, R. A. K. (1992). Patterns of research output in the accounting literature: A study of bibliometric distributions.
Abacus, 28(2), 88–105.
Crane, D. (1967). The gatekeepers of science: Some factors affecting the selection of articles for scientific journals. American Sociologist, 3, 195–
201.
Cummings, L. L., Frost, P. J., & Vakil, T. F. (1985). The manuscript review process: A view from the inside on coaches, critics, and special cases.
In L. L. Cummings & P. J. Frost (Eds.), Publishing in the organizational sciences (1st ed., pp. 471–473). Homewood: RD Irwin.
Czyzewski, A. B., & Dickinson, H. D. (1990). Factors leading to the rejection of accountants’ manuscripts. Journal of Accounting Education, 8,
93–104.
Daft, R. L. (1995). Why I recommend that your manuscript be rejected and what you can do about it. In L. L. Cummings & P. J. Frost (Eds.),
Publishing in the Organizational Sciences (2nd ed., pp. 164–182). Thousand Oaks: Sage.
Daft, R. L., Griffin, R. W., & Yates, V. (1987). Retrospective accounts of research factors associated with significant and not-so-significant research
outcomes. Academy of Management Journal, 30(4), 763–785.
Ellison, G. (2002). Evolving standards for academic publishing: A q–r theory. Journal of Political Economy, 110(5), 994–1034.
Fleming, R. J., Graci, S. P., & Thompson, J. E. (2000). The dawning of the age of quantitative/empirical methods in accounting research: Evidence
from the leading authors of The Accounting Review, 1966–1985. Accounting Historians Journal, 27(1), 43–72.
Gilliland, S. W., & Cortina, J. M. (1997). Reviewer and editor decision-making in the journal review process. Personnel Psychology, 50, 427–452.
Gomez-Mejia, L. R., & Balkin, D. B. (1992). Determinants of faculty pay: An agency theory perspective. Academy of Management Journal, 35(5), 921–955.
Gottfredson, S. D. (1978). Evaluating psychological research reports: Dimensions, reliability, and correlates of quality judgements. American
Psychologist, 920–934.
Hasselback, J. R., & Reinstein, A. (1995). A proposal for measuring scholarly productivity of accounting faculty. Issues in Accounting Education, 10(2), 269–306.
Heck, J. L., Jensen, R. E., & Cooley, P. L. (1990). An analysis of contributions to accounting journals. Part 1. The aggregate performances.
International Journal of Accounting, 26, 1–17.
Humphrey, C. (2001). Paper prophets and the continuing case for thinking differently about accounting research. British Accounting Review, 33, 91–103.
Jauch, L. R., & Wall, J. L. (1989). What they do when they get your manuscript: A survey of academy of management reviewer practices. Academy
of Management Journal, 32(1), 157–173.
Jones, M., & Roberts, R. (2005). Individual publishing patterns: An investigation of leading UK and US accounting and finance journals. Journal
of Business Finance and Accounting, 32(5), 107–1140.
Kerr, S., Tolliver, J., & Petree, D. (1977). Manuscript characteristics which influence acceptance for management and social science journals.
Academy of Management Journal, 20(1), 132–141.
Lee, T. (1997). The editorial gatekeepers of the accounting academy. Accounting, Auditing and Accountability Journal, 10(1), 11–30.
Lee, T., & Williams, P. (1999). Accounting from the inside: Legitimising the accounting academic elite. Critical Perspectives on Accounting, 10, 867–895.
Lowe, A., & Locke, J. (2005). Perceptions of journal quality and research paradigm: Results of a web-based survey of British accounting academics.
Accounting, Organizations and Society, 30, 81–98.
Mitchell, T. R., Beach, L. R., & Smith, K. G. (1985). Some data on publishing from the authors’ and reviewers’ perspectives. In L. L. Cummings
& P. J. Frost (Eds.), Publishing in the organizational sciences (1st ed., pp. 183–194). Homewood: RD Irwin.
Morgan, G. (1985). Journals and the control of knowledge: A critical perspective. In L. L. Cummings & P. J. Frost (Eds.), Publishing in the
organizational sciences (1st ed., pp. 63–75). Homewood: RD Irwin.

Over, R. (1982). Research productivity and impact of male and female psychologists. American Psychologist, 37(1), 24–31.
Parker, L., Guthrie, J., & Gray, R. (1998). Accounting and management research: Password from the gatekeepers. Accounting, Auditing and
Accountability Journal, 11(4), 371–402.
Pondy, L. R. (1995). The reviewer as defence attorney. In L. L. Cummings & P. J. Frost (Eds.), Publishing in the organizational sciences (2nd ed.,
pp. 183–194). Thousand Oaks: Sage.
Reiter, S. A., & Williams, P. F. (2002). The structure and progressivity of accounting research: The crisis in the academy revisited. Accounting
Organizations and Society, 27(6), 575–607.
Rodgers, J. L., & Williams, P. F. (1996). Patterns of research productivity and knowledge creation at The Accounting Review: 1967–1993. Accounting
Historians Journal, 23(1), 51–88.
Rousseau, D. M. (1995). Publishing from a reviewer’s perspective. In L. L. Cummings & P. J. Frost (Eds.), Publishing in the organizational sciences
(2nd ed., pp. 151–163). Thousand Oaks: Sage.
Schneider, B. (1995). Some propositions about getting research published. In L. L. Cummings & P. J. Frost (Eds.), Publishing in the organizational
sciences (2nd ed., pp. 216–226). Thousand Oaks: Sage.
Stahl, M. J., Leap, T. L., & Wei, Z. Z. (1988). Publication in leading management journals as a measure of institutional research productivity.
Academy of Management Journal, 31(3), 707–720.
Tinker, T. (2001). Briloff and the lost horizon. Critical Perspectives on Accounting, 12(2), 149–152.
Tinker, T. (2006). Accounting journal assessment exercise [Editorial]. Accounting Forum, 30(3), 195–208.
Whitley, R. (1986). The transformation of business finance into financial economics: The roles of academic expansion and changes in US capital
markets. Accounting, Organizations and Society, 11(2), 171–192.
Williams, P. F., & Rodgers, J. L. (1995). The accounting review and the production of accounting knowledge. Critical Perspectives on Accounting,
6(3), 263–287.
Williams, P. F., Jenkins, J. G., & Ingraham, L. (2006). The winnowing away of behavioural accounting research in the US: The process for anointing
academic elites. Accounting, Organizations and Society, 31(8), 783–818.
