
April 18, 2013

ImpactStory Grant Proposal

ImpactStory is a nonprofit organization working to transform scholarly communication by

bringing web-native scholarship into the academic reward system. We request two years of

runway funding to help us grow into self-sustainability.

1. What is the main issue and why is it important?

The web is transforming scholarly communication in three ways:

● Scholars can create and publish diverse products beyond the traditional article: data,

software, assertions, figures, preprints and more.

● Scholars can publish these products on diverse new platforms like PeerJ, F1000 Research,

SSRN, ArXiv, Dryad, figshare, ICPSR, GitHub, and others.

● Research can engage diverse people beyond the academy, through blog commentary,

Twitter feeds, Wikipedia descriptions, and Facebook discussions.

The short-term result will be more open, more efficient scholarly products. In the long term, the

move to web-native scholarship will topple the paper-native model we still use today—

essentially unchanged since its 17th-century birth (Cassella & Calvi, 2010). In its place, web-

native scholarship will instantly publish diverse products throughout the research process,

collectively curating them in active online communities (Priem, 2013; Smith, 1999).

The technologies exist to support this transition now, but scholars have been slow to adopt them.

Researchers face a recurring dilemma: even in situations where scholarship may be best served by publishing a web-native product (a dataset, blog post, software repo, etc.), one’s own career is
often better served by instead putting that effort into traditional article-writing. If we want to

move to a more efficient, web-native science, we must make that dilemma disappear: what is

good for scholarship must become good for the scholar. Instead of assessing only paper-native

articles, books, and proceedings, we must build a new system where all types of scholarly

products are evaluated and rewarded.

Altmetrics help fix the broken reward system

The key to this new reward system will be altmetrics: a broad suite of online impact indicators

that goes beyond traditional citations to measure impacts of diverse products, in diverse

platforms, on diverse groups of people (Priem, Taraborelli, Groth, & Neylon, 2010). Altmetrics

leverage the increasing centrality of the Web in scholarly communication, mining evidence of

impact across a range of online tools and environments:

                 General users                        Scholarly users
Recommend        Mainstream media, patient            Faculty of 1000 review, GitHub stars
                 communities
Cite or apply    Wikipedia citation                   Scholarly citation
Discuss          Twitter, Facebook, blog mentions     Scholarly blogs, article comments,
                                                      tweets from scholars
Save             Social bookmarking (Delicious)       Social reference managers
                                                      (Mendeley, CiteULike)
View             HTML views                           PDF downloads, dataset downloads

These and other altmetrics promise to bridge the gap between the potential of web-native

scholarship and the limitations of the paper-native scholarly reward system. A growing body of

research supports the validity and potential usefulness of altmetrics (Eysenbach, 2012; Haustein

& Siebenlist, 2011; Li, Thelwall, & Giustini, 2011; Nielsen, 2007; for more see the altmetrics

Mendeley collection [1]). Eventually, these new metrics may power not only research evaluation,

but also web-native filtering and recommendation tools (Neylon & Wu, 2009; Priem &

Hemminger, 2012; Taraborelli, 2008).

However, this vision of efficient, altmetrics-powered, and web-native scholarship will not occur

accidentally. It will require advocacy to promote the value of altmetrics and web-native

scholarship, online tools to demonstrate the immediate value of altmetrics as an assessment

approach today, and an open data infrastructure to support developers as they create a new,

web-native scholarly ecosystem. ImpactStory aims to provide all three, kickstarting a transition

into a new way of communicating scholarship.

2. What altmetrics products exist now?

A number of products provide impact-tracking tools, but these are limited by the early stage of

the market. No one is creating an open data infrastructure for altmetrics.

Citation-based profiles

Thomson Reuters InCites, Elsevier SciVal, and Symplectic Elements are publisher-owned tools that facilitate impact analysis for institutional subscribers. They offer limited altmetrics data, focusing primarily on citations. They are expensive and closed.

[1] http://www.mendeley.com/groups/586171/altmetrics/
There are a few free profile-based citation tools. Google Scholar Profiles provides article-level

citation data. Unfortunately, its impact data is completely closed, with no API or bulk download

access, so it cannot be built upon or enhanced. Microsoft Academic Search’s academic profiles provide citation (not altmetrics) data and do offer an API, but the data remains too sparse for general use.

Altmetrics-based profiles for publishers

The PLOS article-level metrics (ALM) application is an open-source system used by PLOS to

display altmetrics on their own article pages. Data is open, but only available for PLOS journals

(other journals can also run ALM, if they install, host, and run their own copies, but to date none

have). Altmetric.com is a for-profit startup tool that, like ImpactStory, gathers and displays

altmetrics. However, it is aimed at publishers rather than researchers, and full access to data

requires a £500 subscription per user.

Altmetrics-based profiles for researchers

Plum Analytics, another for-profit altmetrics startup, is beta-testing “PlumX,” a product that,

like ImpactStory, lets individual researchers see their altmetrics. However, Plum’s pricing levels,

source code and data are all closed. If a researcher’s institution does not subscribe (and currently

only two do), she cannot access her metrics, and neither can anyone else. Academia.edu,

ResearchGate, and Mendeley offer profiles for individual users. These for-profit tools display only metrics derived from their own hosting platforms.

Synthesis

Except for PLOS ALM (which is scoped only to traditional journals), this space is dominated by

investor-backed, commercial organizations. This is not inherently bad; indeed, it illustrates the

growing demand for altmetrics tools. However, these market-driven organizations are building

on closed data and closed code, with limited incentive to invest in shared infrastructure. This is

unfortunate, because the full potential of altmetrics lies in a world of diverse services built atop

an open data infrastructure—a data commons to support not just a few altmetrics providers, but

an entire ecosystem of next-generation assessment, recommendation, and filtering services.

3. Why are the proposers qualified to address the issue?

ImpactStory was incorporated as a non-profit on December 18, 2012 by Jason Priem and Heather

Piwowar, with the following charter:

It is the mission, duty, and purpose of the Corporation to create and distribute educational
tools, data and research so that the community can learn about the full online impact of
diverse research products. [...] Innovative publishers, institutions, funders, and scholars
recognize the limitations of the current approach. They want to track the full online
impact of their diverse research products, but lack tools and data. Other, more
conservative members of the community are interested in this approach but waiting to
learn more, or to be convinced of the value. The purpose of the Corporation is to meet
this societal need.

The initial Board of Directors consists of Priem, Piwowar, Cameron Neylon (Public Library of

Science), and John Wilbanks (Sage Bionetworks, Ewing Marion Kauffman Foundation). Tax-

exempt paperwork was filed in January 2013; notification of 501(c)3 status is expected in July

2013.

Since our origins in an all-night hackathon, ImpactStory has had the same two leaders. Heather

Piwowar published one of the first papers measuring the association between open research data

and citation rate, and has continued to publish on and work towards incentives for open science.

She has expertise in statistical analysis and ten years of software development and support

experience in small tech companies. Jason Priem coined the term altmetrics and has helped

define the research field growing up around it, organizing the first two altmetrics workshops, as

well as publishing and speaking widely on the topic. He has built several successful open-source

software applications, and has practical experience and formal training in art, education, design,

and information visualization.

4a. What is the approach being taken?

A vibrant, altmetrics-powered world of web-native scholarship will require early guidance and

public infrastructure. The market is not providing this. ImpactStory will.

Existing for-profit providers have approached altmetrics data as a commodity to be sold. This

stance supports a relatively straightforward business model, and so is understandably attractive

to investor-backed startups. It leads, however, to a negative externality: a fragmented landscape

of tightly guarded silos containing mutually incompatible data (an outcome we have already seen

in the citation database market). It is an approach on the wrong side of history.

The real value of altmetrics data, like other Big Data streams, is not in the numbers themselves;

the value is in what the community can build on top of the data: a new generation of web-native

assessment, filtering, and evaluation tools (Wilbanks, 2011). In this way, open altmetrics data is

quite like the network of open protocols and providers behind the World Wide Web: it is

essential infrastructure for a revolutionized communication ecosystem. ImpactStory is designed

to build and sustain this infrastructure—to create an altmetrics data commons. Our work will

take a three-pronged approach, focusing on advocacy, a free altmetrics webapp, and an open

altmetrics data platform.

Our advocacy will continue along the path set in our first grant. We will give talks at universities

and conferences, lead workshops, publish articles in high-profile venues like Nature, and pursue

more research describing and validating altmetrics. In doing so, we will bring growing attention

to the emerging altmetrics movement. More importantly, though, our advocacy will continue to

help shape the movement, keeping openness and interoperability central to emerging

conversations.

The impactstory.org webapp will be significantly upgraded to offer scholars a powerful

replacement for their online CVs. This “altmetrics CV” will support broad, grass-roots adoption

of altmetrics from working researchers, especially progressive researchers already interested in

experimenting with web-native products like blogs, open-source software, and public datasets.

Hearing about open altmetrics is one thing—but seeing one’s own altmetrics, with the ability to

freely download them, is far more powerful.

Finally, an improved ImpactStory API will form the hub of an open data infrastructure

connecting dozens of diverse data providers (like Mendeley, Twitter, or Dryad) with a

constellation of application developers. Applications include impact-aware PDF readers,

institutional repository usage widgets, literature search tools, enhanced citation indexes, faculty

profile collections, funding databases, institutional and regional impact assessments, expert

identification systems, post-publication peer-review platforms, and recommendation engines—in

fact, we’ve had requests for data from projects in each of these categories already. As we

improve its scalability, our open API will support an ecosystem in which impact data flows like

water among these and other diverse applications, with ImpactStory as the “village well”

supplying a shared, open, always-on stream of impact data.
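To make the “village well” concrete, here is a minimal sketch of what a developer might write against such an open API. The base URL, route, and response shape are illustrative assumptions for this proposal, not the current ImpactStory API contract:

```python
# Illustrative sketch only: the base URL, route, and response shape below
# are assumptions about what an open altmetrics API could look like; they
# are not the current ImpactStory API contract.
import json
import urllib.parse
import urllib.request

API_BASE = "https://api.example.org/v1"  # hypothetical open-data endpoint

def fetch_metrics(doi):
    """Fetch the altmetrics record for one product, keyed by DOI."""
    url = "%s/product/doi/%s" % (API_BASE, urllib.parse.quote(doi, safe=""))
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# An impact-aware PDF reader, a repository widget, and a recommendation
# engine could all share this one call:
record = fetch_metrics("10.1371/journal.pbio.1000242")
for metric, value in sorted(record.items()):  # e.g. "mendeley:readers": 250
    print("%-25s %s" % (metric, value))
```

Because every application draws on the same open call, each new tool strengthens the data commons rather than fragmenting it.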

4b. What is the approach being taken: sustainability

Achieving this long-term mission will require a stable revenue stream. Here we describe our plan

to secure this. Borrowing from the “Lean Canvas,” a common startup business plan approach

(Maurya, 2012; Ries, 2011), we will examine our product, markets, unique advantages, and

finally risks and mitigations.

Product

Our product will be based on the existing ImpactStory application [2]: users identify their scholarly

products, then ImpactStory gathers and displays classified and normalized impact metrics.

[2] http://impactstory.org
Figure 1: A sample article displayed in ImpactStory
This functionality will remain available to anyone, for free. To earn revenue, we will also roll

out premium features for easy account creation, exploration, and automatic updating via VIVO,

ORCID, and other databases and search tools (see section 6).

Markets

Our primary market will be academic libraries buying premium ImpactStory subscriptions for

faculty. Libraries have three reasons to make this purchase:

1. It improves researchers’ productivity by saving them time spent updating CVs—and does

so much more cheaply than Research Information Management (RIM) systems like

Symplectic Elements or Elsevier’s Pure.

2. It empowers those institutional faculty who are at the cutting edge of scholarly

communication, helping to improve both their funding pitches (especially in assessments

like the UK’s REF) and the institution’s reputation.

3. More subtly, but perhaps most importantly, it helps establish the library not just as a place for books and periodicals, but as a forward-thinking, driving force behind scholarly communication on campus. We hear again and again how keen libraries are to make this

transition, despite growing pressure on library budgets (See Appendix D). Indeed, budget

pressure is often driving interest in novel, forward-looking roles for libraries, as librarians

seek to maintain and expand their institutions’ roles in this time of change.

We have seen significant evidence of library interest in the last year: we have received 20 invitations to speak at university libraries and 11 invitations to speak at conferences (usually with travel reimbursement), and have given 3 webinars (see Appendix B for details). This interest is particularly

encouraging given our focus on investing in the product over investing in sales to date.

Furthermore, university libraries see value in altmetrics for researchers: “we do plan to have

every single researcher at the University of Pittsburgh able to look at their own [altmetrics]

profile and share that,” said a director recently [3].

The large number of research universities combined with our low projected expenses ($302k/yr)

gives us room to reach sustainability relatively quickly in this market, even at a low initial

penetration rate. There are 657 PhD-granting institutions with over 2000 FTEs in the United

States: they have approximately 8,088,302 FTEs combined [4]. To speed adoption, we will aim for a low annual subscription of around $0.20/FTE, or $2,000-$5,000 for medium-sized universities

(compare to around $15k for other research productivity tools like RefWorks or Mendeley

Institutional Edition, or $50k for a RIM system like Pure).

[3] http://www.thedigitalshift.com/2013/02/research/as-university-of-pittsburgh-wraps-up-altmetrics-pilot-plum-analytics-announces-launch-of-plum-x/
[4] http://nces.ed.gov/ipeds/datacenter
Innovation diffusion theory predicts several adoption tiers (Moore, 2006; Rogers, 2003). The table below shows projected annual revenue (aggregate FTEs: 8,088,302) at each price point and adoption tier, indicating the market penetration needed to meet our projected expenses:

Price per FTE     Innovators     + early adopters    + early majority    + late majority
                  (2.5%)         (16%)               (50%)               (84%)
$0.05             $10,110        $64,706             $202,208            $339,709
$0.10             $20,221        $129,413            $404,415            $679,417
$0.20             $40,442        $258,826            $808,830            $1,358,835
$0.50             $101,104       $647,064            $2,022,076          $3,397,087
Universities      16             105                 329                 552
(approx., of 657 total)
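Each cell in the table is simple arithmetic: aggregate FTEs times price per FTE times cumulative adoption share. The short sketch below reproduces the rows and relates them to our projected expenses:

```python
# Reproduce the table: aggregate FTEs x price per FTE x cumulative adoption.
FTES = 8088302
PRICES = [0.05, 0.10, 0.20, 0.50]  # annual price per FTE, in dollars
TIERS = [("innovators", 0.025), ("+early adopters", 0.16),
         ("+early majority", 0.50), ("+late majority", 0.84)]

for price in PRICES:
    row = ["%s $%s" % (name, format(round(FTES * price * share), ","))
           for name, share in TIERS]
    print("$%.2f/FTE:" % price, "; ".join(row))
# At $0.20/FTE, early-majority adoption (50%) yields $808,830/yr, well above
# the ~$302k/yr of projected expenses discussed in section 6.
```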

After launching we will measure adoption and adjust our prices accordingly, as recommended by

software pricing advisors (Davidson, 2009). We anticipate high market penetration among Innovator, Early Adopter, and Early Majority university libraries in the next three years.

Unique advantages

From a strictly market perspective, we offer potential library customers several unique

advantages. Heather and Jason are leaders in the altmetrics and open science communities (see

section 8, and attached CVs), boosting our tool’s visibility and credibility. As the only

researcher-led altmetrics tool, ImpactStory is uniquely positioned to understand and meet

researcher needs. Most importantly, our open data infrastructure, backed by an open-source

codebase and a self-sustaining non-profit organization, makes ImpactStory a partner the

scholarly community can trust and rely on. Open data is not a side benefit for us—it’s our raison d’être, and it’s been in our DNA from day one.

Risks and mitigations

This business plan, like all plans, carries with it risk. Below we describe three key risks, along

with mitigation strategies:

If academic library budgets have no room for this product:

● Pursue secondary revenue streams, including: research centers and funders (several

have expressed interest; see Appendix B) and scholarly publishers (many of these have

expressed interest in paying for embedded metrics).

● Continue advocacy efforts, promoting grassroots adoption. Faculty requests for

ImpactStory subscription will carry much weight with librarians.

If we are unable to earn sufficient revenue to become self-sustaining:

● Examine possibility of becoming a membership organization made up of publishers,

funders, and libraries.

● Worst case, dissolve—leaving behind a pivotal advocacy effort, a reusable, open

codebase, and a huge open altmetrics dataset for researchers.

If demand outstrips our ability to scale:

● Use the unanticipated revenues from strong demand to outsource coding to contractors;

we are experienced managing external consultants, having done so on two large

ImpactStory projects already.

Summary

Our plans sound a lot like those of a startup, and not by accident—we value the agility,

responsiveness, and market thinking of startup culture, while simultaneously emphasizing our

unique value as a mission-driven non-profit in this field. Our goal is to drive mission and

sustainability simultaneously through earned income based on a product that promotes broad

metrics, open data, and a focus on grassroots adoption by working scholars.

5. What will be the output from the project?

This grant will promote altmetrics and web-native scholarship via continued advocacy, an

altmetrics webapp, and an open data API, all managed by a self-sustaining organization.

Continued advocacy

We will continue to provide leadership in the areas of web-native scholarship and altmetrics. In

the two years of the grant, we will:

● Give 24 new presentations to libraries, societies, publishers, and funders,

● Publish 6 new advocacy articles, several in high-visibility journals,

● Work with PLOS, NISO, and others to draft open altmetrics standards,

● Provide altmetrics data used in 6 new research articles, and

● Embed ImpactStory data in over 200 IRs, online journals, and other tools.

Altmetrics webapp

Building on our current platform, we will add new features to streamline the account creation

process, improve value to researchers, and increase engagement. Existing features and some new

ones will remain free to all; many of the new features, however, will form the premium service

sold to libraries (we will also offer individual subscriptions, though we expect limited uptake of

these). The new premium features are described below:

● Account creation: We will streamline the profile-creation process: users will just enter their name and email, then import their authored products in one click via Google Scholar, ORCID, VIVO, or by uploading an old-fashioned PDF or Word CV (a sketch of the ORCID import step follows this list). Imported data will be parsed, saved, and enriched with altmetrics data in less than a minute.

● Improved value: Users will have a custom URL (like http://impactstory.org/JasonPriem)

that leads to a profile page including image, links to other online profiles, and a

comprehensive list of products (traditional and web-native) with citation and altmetrics

data built in. This profile will be complete enough to be used in place of an online CV—

or, if users prefer, it will be embeddable into an extant CV or website. An advanced

analytics panel will let researchers filter, explore, and visualize their entire research

histories to surface their most powerful stories—for instance when filling out the

“broader impacts” section of an NSF grant, a researcher could sort by “public impact” in

ImpactStory to uncover the products best demonstrating her impacts on the broader

public.

● Engagement: When an author publishes a new scholarly product online, ImpactStory will

automatically add it to her online profile. New impacts will also be automatically

detected and added. Users will receive weekly summaries of their new products and

impacts. This will make ImpactStory an attractive alternative to a traditional CV (and a

cheaper alternative to a full-fledged RIM system), since it will remove the burden of

manual updating. It will also keep researchers engaged with the application over time,

turning them from one-time to repeat users. For these users, altmetrics and web-native

products become parts of their scholarly identities, driving grassroots pressure to value

these new metrics and products.
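As promised in the account-creation bullet above, here is a minimal sketch of the ORCID leg of one-click import, written against ORCID’s public read API (v3.0 routes shown). Our production import, the Google Scholar and VIVO paths, error handling, and the altmetrics-enrichment step are all omitted:

```python
# Minimal sketch of the ORCID import step: list the works on a public ORCID
# profile as (title, DOI) records, ready for altmetrics enrichment.
import json
import urllib.request

def list_orcid_works(orcid_id):
    url = "https://pub.orcid.org/v3.0/%s/works" % orcid_id
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    works = []
    for group in data.get("group", []):
        summary = group["work-summary"][0]  # preferred version of this work
        title = summary["title"]["title"]["value"]
        ids = (summary.get("external-ids") or {}).get("external-id", [])
        doi = next((i["external-id-value"] for i in ids
                    if i["external-id-type"] == "doi"), None)
        works.append({"title": title, "doi": doi})
    return works

# ORCID's public demo record (Josiah Carberry):
for work in list_orcid_works("0000-0002-1825-0097"):
    print(work["doi"], "-", work["title"])
```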

Open data infrastructure

We will maintain our current API, but improve our ability to scale from hundreds of daily

requests to tens of thousands. This represents not just a quantitative change but a qualitative one. We have had to reject at least a dozen data or API requests from researchers,

tool-builders, journals, and others simply due to scale constraints. Scaling up is a straightforward

programming task, but takes time; by getting that time, we gain the opportunity to throw the

doors open to a developer community already eager for ImpactStory data (see Appendices B and

D). We will also make it easier for external developers to add new data providers to the core

application, leveraging an increasingly interested open-source contributor community.
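To illustrate the shape we have in mind (a simplified sketch, not the actual provider interface in our open-source codebase), a new data source would implement a small, uniform contract:

```python
# Simplified illustration of a provider plugin; the real interface in the
# ImpactStory codebase differs in detail.
class Provider:
    """One altmetrics data source (Mendeley, Dryad, GitHub, ...)."""
    provider_name = "base"

    def member_items(self, author_id):
        """Map an author identifier to product identifiers (DOIs, URLs, ...)."""
        raise NotImplementedError

    def metrics(self, product_id):
        """Return {metric_name: value} for one product."""
        raise NotImplementedError

class GithubProvider(Provider):
    provider_name = "github"

    def metrics(self, repo_url):
        # A real implementation would query the GitHub API; values are
        # hard-coded so the sketch stays self-contained.
        return {"github:stars": 25, "github:forks": 5}

print(GithubProvider().metrics("https://github.com/total-impact"))
```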

Self-sustaining organization

We will cover expenses and contracting with revenues earned from premium subscriptions to

institutions, supplemented with donations and individual subscriptions (see attached budget for

detailed projections).

6. What is the justification for the amount of money requested?

We anticipate becoming revenue-neutral in three years. Beginning in July 2016, we estimate our

steady-state revenue will be $305k/year, 82% from product income and 18% from donations and

special project grants in collaboration with strategic partners.

We estimate our ongoing operating costs at $302k/year. Most of this (56%) will pay for salaries

for the two ImpactStory employees (during the two years of the grant, we will also spend $125k

one-time on external contractors to accelerate progress toward scalability). Data sources will

cost about $72k/year at our anticipated scale, 24% of our expenses. About 12% of expenses will

go to continued outreach and market building, particularly travel and workshop hosting, with 8%

reserved for hosting, contracted-out codebase maintenance, and miscellaneous expenses.

We seek $500k from the Alfred P. Sloan Foundation to help with this transition: $250k per year

for two years. We will use this investment runway to ramp up our earned income and donation

program, as shown in the chart below. We expect the combination of our earned income in the

second year and the Sloan funding to exceed our expenses: we will bank the surplus earned

income to supplement our revenue in year three when the Sloan grant has ended. By year four

we expect our annual revenue to cover expenses.

Figure 2: ImpactStory revenue sources over time

7. What other sources of support does the proposer have?

We have no other sources of support currently in hand or submitted, but we have received

several invitations to apply for supplemental grant funding with various community partners (e.g.

PKP, CDL, NSF software community, HubZero, Dryad), and are following up on these.

8. What is the status and output of the current Sloan grant?

We are on track to meet or exceed most of our milestones by our current grant’s end. Specifically,

we had a goal to establish a sustainable organizational model for a mission-driven organization.

We have filed incorporation paperwork for a nonprofit corporation in North Carolina, established

a Board of Directors, and present the business plan in this document.

We also had several product goals, which we are on track to meet:

● Goal: Visualisations of altmetrics in context: completed.

● Goal: 60 GitHub watchers, 20 forks. So far: we have 25 stars (the new name for

watchers) and 5 forks on our new codebase [5], plus 37 stars and 5 forks on the old codebase [6].

● Goal: A dozen new information sources, particularly data repositories. So far: 6 new

active information sources (now 21 total), with several more near completion.

[5] https://github.com/total-impact
[6] https://github.com/mhahnel/Total-Impact
We are on track to meet most of our use goals:

● Goal: 50k website visits, 30k uniques. So far: we’ve had 52,365 visits, 31,070 uniques.

● Goal: 15 publishers/repositories embedding metrics. So far: we have 4 publishers (more

than 10 journals), but many more in the pipeline. We have signed an MOU with HighWire

Press (1,700 journals and other publications).

● Goal: 5 research studies using ImpactStory data. So far: we know of at least 3, plus one

dissertation to start collecting data this year.

Perhaps most importantly, our code and API have been used by the community. A demo [7] for a new commercial altmetrics product (Plum Analytics) was based on our code, a hackathon project [8] was based on our API, an R interface has been written [9], a JavaScript library has been contributed [10], an R report generator has been written [11], and a faculty profile system has been enriched with altmetrics data [12]. We have collaborated with several other Sloan-funded projects, including the Beyond The PDF 2 workshop and PLOS ALM, and have begun collaboration discussions with representatives of other projects including ICPSR, Sage Bionetworks, and Hypothes.is. We are now planning with VIVO to embed ImpactStory metrics next to Web of Science citation figures on VIVO profiles.

[7] http://www.youtube.com/watch?v=pRnU8aJQQ0U
[8] http://rmimr.wordpress.com/2012/12/08/rerank-it-rerank-pubmed-search-results-based-on-impact/
[9] http://ropensci.github.io/rImpactStory/
[10] https://github.com/highwire/opensource-js-ImpactStory
[11] https://github.com/ropensci/ImpactReport
[12] http://research.mblwhoilibrary.org/works/35002
Our efforts at altmetrics advocacy and education have been successful; we believe these are

vital for fostering the institutional change needed for widespread use of altmetrics and web-

native scholarship. We have:

● Given talks at over three dozen events in North America and Europe,

● Seen ImpactStory mentioned in more than 62 external blog posts [13],

● Written 45 blog posts, which have together been viewed 3,330 times since September, and accumulated over 1,000 followers on the @ImpactStory Twitter account [14],
● Written several articles, including two well-received [15] Nature pieces and an MIT Press book chapter, and edited a special altmetrics issue of the ASIST Bulletin,

● Organized the international Altmetrics12 Workshop, the PLOS Altmetrics Collection,

and helped organize the PLOS ALM workshop+hackathon.

There is also one area where we have not met projections: so far we know of only five scholars

embedding ImpactStory data on CVs. We also have evidence of three ImpactStory reports

included in annual review or tenure/promotion packages, plus one grant submission. This has

been supplemented by many reports of ImpactStory data being used less formally, though no

less meaningfully: to provoke conversations about broader impacts at high administrative levels

(see Appendix D). Moreover, these numbers are certainly undercounts, since we know many

users do not report their ImpactStory use. However, these still fall short of hoped-for uptake. We

believe there are two reasons:


[13] https://delicious.com/hpiwowar/total-impact
[14] https://twitter.com/impactstory
[15] http://impactstory.org/collection/kg4ayx
First, for most of the grant our tool lacked several important technical features, including stable

profile URLs, easy product input, and editable profiles (these have since been added or are

underway). Second, it took until late in the grant to gain significant traction for the idea that

altmetrics could be beneficial on a CV. Our most recent evidence (see Appendix D) suggests that

early-adopting researchers are increasingly interested in employing altmetrics, but this change

took longer than we expected, and is still very much in progress. We believe that continued altmetrics advocacy is the best way to address this; researchers indicate it is having a powerful, if gradual, effect.

Overall, we are confident that continued advocacy, an improved webapp, a scalable open API,

and a practical, high-growth business strategy will nurture ImpactStory into a powerful and

sustainable force supporting open, interoperable altmetrics. These in turn can be the foundation

of a new system of web-native scholarship—one that rewards scholars for creating diverse,

powerful, engaged products. On top of an open altmetrics infrastructure, we can build a new, better, more efficient kind of scholarship.

Appendix A: references

Cassella, M., & Calvi, L. (2010). New journal models and publishing perspectives in the

evolving digital environment. IFLA Journal, 36(1), 7–15.

doi:10.1177/0340035209359559

Davidson, N. (2009). Don’t Just Roll the Dice: A Usefully Short Guide to Software Pricing. Red Gate Books.

Eysenbach, G. (2012). Can tweets predict citations? Metrics of social impact based on Twitter

and correlation with traditional metrics of scientific impact. Journal of Medical Internet

Research, 13(4). doi:10.2196/jmir.2012

Haustein, S., & Siebenlist, T. (2011). Applying social bookmarking data to evaluate journal

usage. Journal of Informetrics, 5(3), 446–457. doi:10.1016/j.joi.2011.04.002

Li, X., Thelwall, M., & Giustini, D. (2011). Validating online reference managers for scholarly

impact measurement. Scientometrics, 91(2), 461–471. doi:10.1007/s11192-011-0580-x

Maurya, A. (2012). Running Lean: Iterate from Plan A to a Plan That Works (2nd ed.). O’Reilly Media.

Moore, G. A. (2006). Crossing the Chasm: Marketing and Selling Disruptive Products to Mainstream Customers (rev. ed.). HarperBusiness.

Neylon, C., & Wu, S. (2009). Article-Level Metrics and the Evolution of Scientific Impact. PLoS

Biol, 7(11), e1000242. doi:10.1371/journal.pbio.1000242

Nielsen, F. (2007). Scientific citations in Wikipedia. First Monday, 12(8).

Priem, J. (2013). Scholarship: Beyond the paper. Nature, 495(7442), 437–440.

doi:10.1038/495437a

Priem, J., & Hemminger, B. H. (2012). Decoupling the scholarly journal. Frontiers in

Computational Neuroscience, 6(19). Retrieved from

http://www.frontiersin.org/computational_neuroscience/abstract/14455

Priem, J., Taraborelli, D., Groth, P., & Neylon, C. (2010). alt-metrics: a manifesto. Retrieved

August 15, 2011, from http://altmetrics.org/manifesto/

Ries, E. (2011). The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to

Create Radically Successful Businesses (First Edition.). Crown Business.

Rogers, E. M. (2003). Diffusion of Innovations (5th ed.). Free Press.

Smith, J. (1999). The deconstructed journal: A new model for academic publishing. Learned Publishing, 12(2), 79–91.

Taraborelli, D. (2008). Soft peer review: Social software and distributed scientific evaluation. In

Proceedings of the 8th International Conference on the Design of Cooperative Systems

(COOP ’08). Carry-Le-Rouet.

Wilbanks, J. (2011). Openness as infrastructure. Journal of Cheminformatics, 3(1), 36.

doi:10.1186/1758-2946-3-36

Appendix B: Evidence of interest from libraries, centers, and
publishers

If only our Faculty Annual Reporting system worked more like @impactstory.
—Stacy Konkiel, Indiana University librarian [reproduced here as it is a public tweet]

● Invited to speak at conferences: ALA Chicago 2013, Texas Librarian


● Invited to speak at conferences, reimbursed: CLA Ottawa 2012, ALA midwinter Seattle
2013, Access 2012, Fiesole workshop in Singapore 2013, Association of Research
Libraries membership meeting May 2012*, SPARC forum June 2012*, University of
Massachusetts and New England Area Librarian E-Science Symposia*, Pacific
Northwest Symposium on Emerging Roles for Librarians*, OLA 2013* (*invited but
wasn’t able to attend)
● Invited to give talks at university libraries: Duke, UBC, SFU, Purdue, Maryland, Furman,
Columbia x2, Ohio State, CUNY*, Occidental College*, American University Library,
Connecticut*, NIH, Florida, Oklahoma*, University of Alberta*, Illinois at Chicago*,
Pittsburgh*, New Brunswick* , Central Michigan, UNC (*invited but wasn’t able to
attend)
● Webinars: Florida librarians, SLA, ACRL

Research centers that have reached out:


● NCAR
● NESCent
● Discussion with GenomeBC, would use to track their own impact
● Discussions with Wellcome Trust, would use to track their own impact

Publishers embedding ImpactStory data (important because it raises our visibility among researchers):
● Already using our product: pensoft, ubiquity press, peerev, CITAR, eLife, PeerJ
● Signed MOU: HighWire
● Have been in discussions with: Wiley, BioMed Central, Open Medicine
● Invited to trade conferences, including SSP in 2012 and 2013, the Highwire Spring
Conference 2012, Charleston 2012, STM conference, Council of Science Editors meeting
(teaching a class), and Emerging Trends in Scholarly Publishing workshop.

Appendix C: Description of ImpactStory

ImpactStory provides effective altmetrics. Our data is comprehensive: we support multiple product types, including slides, software, datasets, and webpages, along with traditional articles. Tweets, Facebook discussions, and Delicious bookmarks reveal conversation beyond academia. Clicking on these metrics shows the original text of most interactions, allowing for qualitative as well as quantitative assessment.

Our data is open, released under as open a license as our data providers allow. The data can be downloaded as JSON (to support building other applications), CSV (to support analysis and visualization), or JavaScript widget code (for easy embedding). The URL for a collection is a permanent URL, openly viewable by anyone, so it can be tweeted or included in grant applications.
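For example, a downloaded JSON collection can be flattened into CSV with a few lines of code. The JSON shape below is a simplified assumption, not our documented export format:

```python
# Flatten a (simplified, assumed) ImpactStory JSON export into CSV rows.
import csv
import json

export = json.loads("""
{"products": [
  {"id": "10.1371/journal.pbio.1000242",
   "metrics": {"mendeley:readers": 250, "twitter:tweets": 40}},
  {"id": "https://github.com/total-impact",
   "metrics": {"github:stars": 25, "github:forks": 5}}
]}
""")

with open("collection.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["product", "metric", "value"])
    for product in export["products"]:
        for metric, value in product["metrics"].items():
            writer.writerow([product["id"], metric, value])
```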

ImpactStory’s data is accessible to all types of users: any researcher can explore collections of their work via our web interface to surface data-driven stories to report to funders and evaluators. They can also embed this information in their CVs. Publishers and repositories can embed ImpactStory data right next to their products.

Finally, ImpactStory goes beyond these three key tenets of evaluation data to make altmetrics data meaningful through context: metrics are presented in a framework accounting for both audience and engagement type. Users unfamiliar with Mendeley, for instance, can still understand that their product has been highly saved by scholars. Significant Wikipedia activity might earn a “highly cited by the public” badge. To facilitate interpretation, metrics are presented in comparison to other products (with a 95% confidence interval) rather than in isolation.

[Example figures omitted.]
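As a simplified illustration of that comparison (the reference samples and interval method in the product may differ), a product’s metric can be placed as a percentile of a reference sample, with a normal-approximation 95% confidence interval on that percentile:

```python
# Place one product's metric in context: percentile within a reference
# sample, with a normal-approximation 95% CI (illustrative method only).
import math

def percentile_with_ci(value, reference):
    n = len(reference)
    p = sum(1 for r in reference if r < value) / n  # share scoring below value
    half = 1.96 * math.sqrt(p * (1 - p) / n)        # 95% normal approximation
    return p, max(0.0, p - half), min(1.0, p + half)

readers = [0, 1, 1, 2, 3, 5, 8, 12, 20, 250]        # toy reference sample
p, lo, hi = percentile_with_ci(40, readers)
print("saved more than %.0f%% of sampled products (95%% CI %.0f-%.0f%%)"
      % (100 * p, 100 * lo, 100 * hi))
```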

Appendix D: Letters of support

[All the quotations and names have been redacted from this section, since we didn’t
have permission from all the authors to reproduce them.]

From librarians

[redacted]

From researchers and administrators

[redacted]

From toolmakers

[redacted]

Appendix G: Conflict of Interest statement

Heather Piwowar has been funded by DataONE and the Dryad Data Repository. Heather
Piwowar serves on the advisory board for F1000 Research.
Jason Priem and Heather Piwowar do not have other potential conflicts of interest or sources of
bias.

