NISO Webinar: Beyond Publish or Perish: Alternative Metrics for Scholarship

November 14, 2012
1:00 - 2:30 p.m. (Eastern Time)

  • About the Webinar
  • Agenda
  • Registration
    Can't make it on the 14th? Register now and gain access to the archive for one year.
  • System Requirements:
    You will need a computer for the presentation and Q&A and a telephone for the audio.
    Please check your system to make sure it is ready to use Cisco WebEx: http://support.webex.com/support/system-requirements.html. It is your responsibility to ensure that your system is properly set up before each webinar begins.

About the Webinar

Many aspects of scholarly communication—particularly publication, research data, and peer review—are coming under increasing scrutiny from researchers and scholars. Many of these practitioners are engaging with alternative metrics (#altmetrics in the Twitterverse) in a variety of ways. Alternative metrics take many forms but often focus on moving beyond proprietary bibliometrics and traditional forms of peer referencing when assessing the quality and scholarly impact of published work. Join NISO for a webinar that will present several emerging aspects of alternative metrics.


Agenda

Todd Carpenter, Executive Director, NISO

Read Todd's recent article at The Scholarly Kitchen.

Article-Level Metrics at PLOS
Martin Fenner, Technical Lead, PLOS Article-Level Metrics project
Article-Level Metrics have become an exciting new opportunity for publishers, funders, universities, and researchers. The publisher Public Library of Science (PLOS) began collecting and displaying citations, usage data, and social web activity for all of its articles in 2009. The webinar will discuss the opportunities (and challenges) of Article-Level Metrics, from issues in collecting the data to interesting results of data analysis.

Total-Impact and other altmetrics initiatives
Jason Priem, Ph.D. Student and Co-Principal Investigator, ImpactStory

Altmetrics help us track diverse scholarly impacts by looking in new places for evidence--public places like Wikipedia and Twitter, and scholarly environments like Mendeley and Faculty of 1000. Doing this lets us promote and reward new forms of Web-native scholarship in two ways. Broader measures of impact help us move

  • beyond the article: we can value the increasingly important and powerful new genres of scholarly products like blog posts, software, and datasets, and
  • beyond the impact factor: we can value the observed impact of scholarly products themselves, across lots of different audiences and use types--not just rewarding the prestige of where they're published.

That said, altmetrics can be tricky to gather and understand. We'll discuss tools and frameworks to help turn rich but dense altmetrics data into data-supported stories that can help inform important conversations about what it means to make a scholarly impact. 

Unconventional Scholarly Communications
Aalam Wassef, Founder of Peer Evaluation

Participate in Aalam's survey on social networks at https://www.surveymonkey.com/s/VNZSNRZ

Scholars are blogging, microblogging, searching, sharing primary data, collaborating, discussing, rating, bookmarking articles in public folders, recommending links over public networks, offering live coverage of events, and receiving badges, views, likes, or mentions for all they do online and elsewhere. More than ever, scholars are communicating and getting credit for it, with no limitations as to style, format, or environment, and enjoying high levels of engagement and responsiveness from their peers.

  • How are all the other parties concerned (librarians, public funders, policy makers, publishers, universities, research centers) absorbing, supporting, or rejecting all of the above?
  • Could “unconventional” communications and alternative metrics eventually be valued as highly as peer-reviewed articles and proprietary bibliometrics? How many of these altmetrics are truly accessible and free of charge, and what are the alternatives to potential limitations?
  • What is the current perception of direct publishing and open peer review, whether by individuals, groups, or institutions? What are the risks and opportunities for the production of high-quality research?

Event Q&A


Q: Have you considered adding LinkedIn to the social web tracking? I am starting to see increasing traffic in my own LinkedIn network regarding research article sharing.

A. Fenner: We haven't considered LinkedIn yet, but we are constantly adding new sources. Reddit is probably the missing source that generates the most traffic to PLOS journal articles.

A. Priem: Yep, that's on the roadmap! We're still looking into whether it's even possible; the LinkedIn ecosystem is pretty locked-down. But there's lots of potential there, as some of our research suggests a large percentage of scholars are on LinkedIn (70% in one study we did). Of course, not all of them or even most of them are sharing articles there, but still. Even just counting connections would be really interesting.

Q: Is Zotero ever used to measure metrics?

A. Priem: I'd love to use Zotero in ImpactStory, but they don't have a public search API we can use for that. They've indicated they might make one if there's demand, so ask them! If it happens, it'll be in ImpactStory.

Q: Jason, could you please share the chart that indicated public vs. scholarly again?

A. Priem: Sure, glad you're interested! It's in this blog post, along with a bit of extra background: http://blog.impactstory.org/2012/09/14/31524247207/

Q: It seems to me that this type of data could be easily manipulated using people's social networks.

A. Fenner: Gaming and data manipulation are indeed bigger issues for usage and social web data compared to traditional citations. PLOS has implemented a monitoring tool to detect at least some of these manipulations.

A. Priem: Manipulated: yes. Easily: depends. There are certainly ways to catch a lot of gaming if you look, as Martin mentioned in his presentation. "Algorithmic forensics" helps us notice inconsistencies in data that humans couldn't, and it works surprisingly well--especially when we can cross-check multiple metrics sources, and when we have provenance for all the activity. Google is a great example; it's easy in theory to manipulate your PageRank using "black-hat SEO" techniques like link farms, but Google's algorithmic immune system is pretty robust against this in practice. The fact that Google stays useful despite millions of dollars at stake for successful gamers is encouraging for altmetrics, I think.

Of course, there will still be gaming of altmetrics, just like there's gaming of citation metrics. But I think we can make it so expensive and dangerous that it's easier for most folks to just do good scholarship. Of course, a lot of this is hypothetical; right now, these metrics are new enough that there's little incentive to game them at all.
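As a toy illustration of the cross-checking idea described above (this is not PLOS's actual monitoring tool; the article data, threshold, and function name are invented for this sketch), one simple "forensic" check flags articles whose social-media counts are wildly out of proportion to their usage counts:

```python
# A minimal sketch of cross-checking two metrics sources to spot
# possible gaming. All numbers and the threshold are illustrative.

def suspicious(articles, ratio_threshold=50):
    """Return titles whose tweet count exceeds ratio_threshold
    times their HTML views -- a crude consistency check."""
    flagged = []
    for a in articles:
        views = max(a["views"], 1)  # avoid division-by-zero edge case
        if a["tweets"] > ratio_threshold * views:
            flagged.append(a["title"])
    return flagged

articles = [
    {"title": "Normal paper", "views": 2000, "tweets": 40},
    {"title": "Gamed paper", "views": 3, "tweets": 900},
]
print(suspicious(articles))  # -> ['Gamed paper']
```

A real monitoring system would of course combine many such signals and use provenance data, as the panelists note, but even this crude ratio test shows why gaming one metric without gaming the others is easy to detect.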

Q: What data have you seen on how altmetrics are finding their way into decisions such as the selection of research partners, solicitation of an author to submit for publication or peer review, the granting of funding, or academic tenure?

A. Priem: I think I partly answered this one in the session Q & A. It's very, very early for altmetrics to be used as major factors in decisions like tenure and granting. There is just still so much research to be done here. Remember that citation-based metrics have taken decades to establish their value in evaluation, and many still protest their use. 

That said, this altmetrics data is out there right now, and it's building pictures--fuzzy though they may be--of impact that are richer, broader, and more nuanced than anything we've ever seen before. Right now, today, altmetrics can mean the difference between a story and a "data-supported story," and that's a pretty important difference.

Perhaps most importantly in the short term, growing awareness and use of altmetrics is driving conversations about impact--about what it is, what it could be, and what we should be rewarding. I think those conversations are long overdue, and I think we're going to increasingly see them bear fruit. We're already seeing anecdotal examples of this (http://sciencecareers.sciencemag.org/career_magazine/previous_issues/articles/2012_11_09/caredit.a1200124, for example), and I think that's really exciting.

Q: What is "forking"?

A. Priem: In code, you make a new codebase using an old one as a base, and then work on it independently. Sometimes you then merge your changes back into the thing you forked from. It's really common on the increasingly important open-source code repository GitHub.

A. Wassef: "To fork" is to take the beginning of something (say, a conversation, an article, computer code, etc.), build on it, and steer it in another direction. You can also imagine a tree, its trunk, and its many branches.

On the Internet, forking mainly happens in contexts that allow it: open source initiatives, content created under a Creative Commons license, or collaborative work in general. Here's a concrete example: a group of computer programmers discovers open source code that would be useful for an application they are about to build. They take that code and "fork it," meaning they add to it and make it do new things.

So in the broad context of scholarly communication and research, it's easy to imagine this happening. For instance, a dataset like the one Martin Fenner showed us yesterday could be "forked" in many directions.
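A minimal sketch of the fork-and-merge workflow the panelists describe, using plain git in a throwaway directory (the repository names and file contents here are invented for illustration, and git is assumed to be installed; on GitHub the "fork" step is a button, but the underlying mechanics are the same):

```shell
set -e
cd "$(mktemp -d)"

# 1. Someone publishes an "upstream" project (here, a tiny local repo).
git init -q -b main upstream
cd upstream
git config user.email demo@example.com
git config user.name Demo
echo "original analysis" > analysis.txt
git add analysis.txt
git commit -qm "initial version"
cd ..

# 2. "Forking" makes a full, independent copy you control.
git clone -q upstream fork
cd fork
git config user.email demo@example.com
git config user.name Demo

# 3. Work on the fork independently, steering it in a new direction.
echo "extended with a new dataset" >> analysis.txt
git commit -qam "extend the analysis"

# 4. Optionally, merge the changes back into what you forked from.
cd ../upstream
git pull -q ../fork main
cat analysis.txt  # now contains the original line plus the forked addition
```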

Q: While we're building support and standards for altmetrics, how can I use altmetrics (as a publisher) today?

A. Fenner: We have enough support and standards today to use altmetrics to tell data-driven stories and for business intelligence. To use altmetrics as a publisher, you can either a) talk to one of the service providers (ImpactStory or altmetric.com), or b) install open source altmetrics software (from ImpactStory or PLOS).

A. Priem: Why, I'm glad you asked :) The best way for a publisher to use altmetrics today is to embed open, transparent, contextualized metrics from ImpactStory! Here's a mockup of us embedded on a BMC article (http://i.imgur.com/BiTKY.png); we're about to roll out a one-line JavaScript widget that'll make dropping ImpactStory badges like this into existing articles a snap. Users can click these badges to get more information and context, just like on impactstory.org. There's also altmetric.com and PLOS ALM, which both do similar things, although I'm of course a bit partial to the one I'm building, for reasons I discussed in my presentation: openness, context, and normalization.



Registration

If paying by credit card, register online.

If paying by check, please use this PDF form.

Registration closes on November 14, 2012 at 12:00 pm Eastern.

Registration Costs

SAVE! Register for multiple events.

  • NISO Member
    • $89.00 (US and Canada)
    • $104.00 (International)
  • NASIG Member
    • $89.00
  • Non-Member
    • $119.00 (US and Canada)
    • $144.00 (International)
  • Student
    • $49.00

Additional Information

  • Registration closes at 12:00 pm Eastern on November 14, 2012. Cancellations made by November 7, 2012 will receive a refund, less a $20 cancellation fee. After that date, there are no refunds.
  • Registrants will receive detailed instructions about accessing the webinar via e-mail the Monday prior to the event. (Anyone registering between Monday and the close of registration will receive the message shortly after the registration is received, within normal business hours.) Due to the widespread use of spam blockers, filters, out of office messages, etc., it is your responsibility to contact the NISO office if you do not receive login instructions before the start of the webinar.
  • Registration is per site (access for one computer) and includes access to the online recorded archive of the webinar. If you are registering someone else from your organization, either use that person's e-mail address when registering or contact the NISO office to provide alternate contact information.
  • Webinar presentation slides and Q&A will be posted to the site following the live webinar.
  • Registrants will receive access information to the archived webinar following the event. An e-mail message containing archive access instructions will be sent within 48 hours of the event.