Twitter and your academic reputation: friends or enemies?

Trial by Twitter

Feedback from social media like Twitter can strike as fast as lightning, with unforeseen consequences. For many researchers, the pace and tone of this online review can be intimidating, and can sometimes feel like an attack. How do authors best deal with these forms of post-publication peer review? This "social peer review" moves far faster than the formal peer review a paper undergoes during submission and acceptance at (top) academic journals, and the feedback can come from anywhere, not just the circle of accepted experts in the field, as in a journal's blind review process.

The result can have an enormous impact on your academic reputation. What if thousands of tweets suddenly disapprove of the conclusions of a paper just published? That situation is nearly impossible for the author to handle. A "negative sentiment" will form around the publication and affect aspects of your reputation, for example the chance that your paper will be cited often. How will this social sentiment affect other papers in the pipeline and under submission with (top) journals? How will it affect your co-authors? How will it influence the chances of your grant applications? How will it influence your tenure process if the sentiment is negative? These are all high stakes for researchers.

A recent article by Apoorva Mandavilli in Nature deals with this issue. It covers "fast feedback", "a chorus of (dis)approval", "meta-Twitters", "new (alt)metrics of communication" and some possible solutions for the situation.

The potential power of social media for research and academic reputation is evident to me. Managing the communication and the speed of the feedback requires special skills and special publication strategies from researchers (and institutes!) who care about their future careers and their reputation. The open review dynamics on networks like Twitter currently carry many risks for the author of a paper, while the stakes and risks for the crowd that collectively performs these "trials" are, I guess, very low. A single tweet is not powerful, but the flock together is impactful: a collective review by the crowd, often with many people who simply follow the sentiment by re-tweeting others.

I advise researchers to be very careful about which message about their paper is distributed in social networks, how it is distributed, by whom, and who is replying to it. The social networks should not reproduce or copy the formal peer review process by selected experts; they should focus on adding value to the possible additional virtues of the work. The best approach might be to leverage social media by initiating stories on the possible practical value and practical impact of the research. When these are confirmed by the wider social network audience, the author gains confidence that the practical / managerial value of the research is immediately valued and tested. In this way social networks can be very beneficial for academic reputation: they act as a sounding board for testing the managerial / practical value of the research.

New peer review guide published by the Research Information Network

Peer review: good for all purposes? | Research Information Network.

Peer review is both a principle and a set of mechanisms at the heart of the arrangements for evaluating and assuring the quality of research. A new guide from the Research Information Network provides for researchers and others an outline of how the peer review system works, and highlights some of the challenges as well as the opportunities it faces in the internet age.

Peer review: A guide for researchers sets out the processes involved in peer review for both grant applications and publications. It also looks at the issues that have been raised in a series of recent reports on the costs of the system, and how effective and fair it is.

The growth in the size of the research community and of the volumes of research being undertaken across the world means that the amount of time and effort put into the peer review system is growing too, and that it is coming under increasing scrutiny. The guide looks at how effective peer review is in selecting the best research proposals, as well as in detecting misconduct and malpractice.

The guide also looks at how fair the system is, and at the different levels of transparency involved in the process: from completely closed systems, where the identities of reviewers and those whose work is being reviewed are kept hidden from each other, and reports are not revealed, to completely transparent systems where identities and reports are openly revealed.

The burdens on researchers as submitters and reviewers are by far the biggest costs in the peer review system, and the guide outlines some of the measures that are being taken to reduce those burdens, or at least to keep them in check. A growing number of researchers are taking the view that they should be paid for the time they spend in reviewing grant applications and draft publications. But there are also concerns that such payment would significantly increase the costs of the system, and also of scholarly publications.

The internet has speeded up the process of peer review, and widened the pool of reviewers who can be drawn on. It has also provided new channels through which researchers can communicate their findings, and through which other researchers can comment on, annotate and evaluate them. These new opportunities bring new challenges as well. The take-up of the opportunities for open comments, ratings and recommender systems has been patchy to date; and we currently lack clear protocols for the review of findings circulated in multiple formats, including blogs and wikis. The mechanisms for peer review will undoubtedly change in coming years, but the principle will remain central to all those involved in the research community.

Peer review: A guide for researchers is available at www.rin.ac.uk/peer-review-guide.

Finally, a good initiative: ORCID: Open Researcher Contributor Identification Initiative

ORCID: Open Researcher Contributor Identification Initiative – Home.

Name ambiguity and attribution are persistent, critical problems embedded in the scholarly research ecosystem. The ORCID Initiative represents a community effort to establish an open, independent registry that is adopted and embraced as the industry’s de facto standard. Our mission is to resolve the systemic name ambiguity, by means of assigning unique identifiers linkable to an individual’s research output, to enhance the scientific discovery process and improve the efficiency of funding and collaboration. Accurate identification of researchers and their work is one of the pillars for the transition from science to e-Science, wherein scholarly publications can be mined to spot links and ideas hidden in the ever-growing volume of scholarly literature. A disambiguated set of authors will allow new services and benefits to be built for the research community by all stakeholders in scholarly communication: from commercial actors to non-profit organizations, from governments to universities.
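A minimal sketch of why a unique identifier resolves ambiguity that name strings cannot (the names, titles, and ORCID iDs below are made up for illustration): grouping a publication list by author name both merges distinct people who share a name and splits one person across spellings, while grouping by iD attributes each work correctly.

```python
from collections import defaultdict

# Hypothetical publication records: one researcher appears under two
# name spellings, and a second researcher shares the "J. Wang" name.
publications = [
    {"author_name": "J. Wang",   "orcid": "0000-0002-1825-0097", "title": "Paper A"},
    {"author_name": "Jing Wang", "orcid": "0000-0002-1825-0097", "title": "Paper B"},
    {"author_name": "J. Wang",   "orcid": "0000-0001-5109-3700", "title": "Paper C"},
]

by_name = defaultdict(list)
by_orcid = defaultdict(list)
for pub in publications:
    by_name[pub["author_name"]].append(pub["title"])
    by_orcid[pub["orcid"]].append(pub["title"])

print(dict(by_name))   # merges two different people under 'J. Wang'
print(dict(by_orcid))  # two clean groups, one per researcher
```

The same keying-by-identifier idea is what lets downstream services (citation counts, funder reports, CVs) aggregate a person's output without guessing from name strings.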

And related news from Knowledgespeak:

http://www.knowledgespeak.com/newsArchieveviewdtl.asp?pickUpBatch=1321&pickUpID=9303

Research community members seek to resolve author name ambiguity issue (07 Dec 2009)

Various members of the research community have announced their intent to collaborate to resolve the existing author name ambiguity problem in scholarly communication. Together, the group hopes to develop an open, independent identification system for scholarly authors. This follows the first Name Identifier Summit held last month in Cambridge, MA, by Thomson Reuters and Nature Publishing Group, where a cross-section of the research community explored approaches to address name ambiguity. A follow-on meeting of this group took place in London last week to discuss the next steps.

Accurate identification of researchers and their work is seen as key for the transition from science to e-science, wherein scholarly publications can be mined to spot links and ideas hidden in the growing volume of scholarly literature. A disambiguated set of authors will allow new services and benefits to be built for the research community by all stakeholders in scholarly communication: from commercial actors to non-profit organisations, from governments to universities.

The organisations that have agreed to work together to overcome the contributor identification issue include: American Institute of Physics, American Psychological Association, Association for Computing Machinery, British Library, CrossRef, Elsevier, European Molecular Biology Organisation, Hindawi, INSPIRE (project of CERN, DESY, Fermilab, SLAC), Massachusetts Institute of Technology Libraries, Nature Publishing Group, Public Library of Science, ProQuest, SAGE Publications Inc., Springer, Thomson Reuters, University College London, University of Manchester (JISC Names Project), University of Vienna, Wellcome Trust and Wiley-Blackwell.

A New Era in Citation and Bibliometric Analyses: Web of Science, Scopus, and Google Scholar

Lokman I. Meho and Kiduk Yang
School of Library and Information Science, Indiana University, 2007

Abstract:

Academic institutions, federal agencies, publishers, editors, authors, and librarians increasingly rely on citation analysis for making hiring, promotion, tenure, funding, and/or reviewer and journal evaluation and selection decisions. The Institute for Scientific Information’s (ISI) citation databases have been used for decades as a starting point and often as the only tools for locating citations and/or conducting citation analyses. ISI databases (or Web of Science), however, may no longer be adequate as the only or even the main sources of citations because new databases and tools that allow citation searching are now available. Whether these new databases and tools complement or represent alternatives to Web of Science (WoS) is important to explore. Using a group of 15 library and information science faculty members as a case study, this paper examines the effects of using Scopus and Google Scholar (GS) on the citation counts and rankings of scholars as measured by WoS. The paper discusses the strengths and weaknesses of WoS, Scopus, and GS, their overlap and uniqueness, quality and language of the citations, and the implications of the findings for citation analysis. The project involved citation searching for approximately 1,100 scholarly works published by the study group and over 200 works by a test group (an additional 10 faculty members). Overall, more than 10,000 citing and purportedly citing documents were examined. WoS data took about 100 hours of collecting and processing time, Scopus consumed 200 hours, and GS a grueling 3,000 hours.

Conclusions by the authors:

The study found that the addition of Scopus citations to those of WoS could significantly alter the ranking of scholars. The study also found that GS stands out in its coverage of conference proceedings as well as international, non-English language journals, among others. GS also indexes a wide variety of document types, some of which may be of significant value to researchers. The use of Scopus and GS, in addition to WoS, reveals a more comprehensive and accurate picture of the extent of the scholarly relationship between LIS and other fields, as evidenced by the unique titles that cite LIS literature (e.g., titles from Cognitive Science, Computer Science, Education, and Engineering, to name only a few). Significantly, this study has demonstrated that:

  1. Although WoS remains an indispensable citation database, it should not be used alone for locating citations to an author or title, and, by extension, journals, departments, and countries; Scopus should be used concurrently.
  2. Although Scopus provides more comprehensive citation coverage of LIS and LIS-related literature than WoS for the period 1996-2005, the two databases complement rather than replace each other.
  3. While both Scopus and GS help identify a considerable number of citations not found in WoS, only Scopus significantly alters the ranking of scholars as measured by WoS.
    Although GS unique citations are not of the same quality as those found in WoS or Scopus, they could be very useful in showing evidence of broader international impact than could possibly be done through the two proprietary databases.
  4. GS value for citation searching purposes is severely diminished by its inherent problems. GS data are not limited to refereed, high quality journals and conference proceedings. GS is also very cumbersome to use and needs significant improvement in the way it displays search results and the downloading capabilities it offers for it to become a useful tool for large-scale citation analyses.
  5. Given the low overlap or high uniqueness between the three tools, they may all be necessary to develop more accurate maps or visualizations of scholarly networks and impact both within and between disciplines (Börner, Chen, & Boyack, 2003; Börner, Sanyal, & Vespignani, 2006; Small, 1999; White & McCain, 1997).
  6. Each database or tool requires specific search strategy(ies) in order to collect citation data, some more accurately and quickly (i.e., WoS and Scopus) than others (i.e., GS).

(Accepted for publication in the Journal of the American Society for Information Science and Technology)
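The overlap and uniqueness findings above come down to set arithmetic over the citing documents each database returns. A sketch with invented citing-document IDs (not data from the study): combining sources grows the total citation count, and each source contributes citations the others miss.

```python
# Hypothetical citing-document IDs for one author in each database.
wos    = {"c1", "c2", "c3", "c4"}
scopus = {"c2", "c3", "c5", "c6"}
gs     = {"c3", "c6", "c7", "c8", "c9"}

combined   = wos | scopus | gs        # union: all citations found anywhere
unique_gs  = gs - (wos | scopus)      # citations only Google Scholar finds
overlap_ws = wos & scopus             # citations WoS and Scopus share

print(len(combined))      # larger than any single database alone
print(sorted(unique_gs))  # GS-only citations, e.g. proceedings, non-English venues
```

With real data the IDs would have to be matched first (DOIs where available, fuzzy title matching otherwise), which is exactly the labor-intensive step behind the hour counts the abstract reports.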

Author Affiliation Index (AAI); the pattern of authorship/coauthorship across journals

Although this recent study by Chen and Huang (Journal of Corporate Finance, 2007) is focused on the field of finance, the concept of the AAI is valuable for all fields of management research. The AAI is calculated as the ratio of articles authored by faculty at the world’s top 80 finance programs to the total number of articles by all authors. It provides academics with a credible alternative measure of journal quality, in addition to the traditional survey-based and citation-based journal ratings.

Abstract:

In this paper we use a new method to rank finance journals and study the pattern of authorship/coauthorship across journals. Defined as the ratio of articles authored by faculty at the world’s top 80 finance programs to the total number of articles by all authors, the Author Affiliation Index is a cost-effective and intuitively easy-to-understand approach to journal rankings. Forty-one finance journals are ranked according to this index. If properly constructed, the Author Affiliation Index provides an easy and credible way to supplement the existing journal ranking methods. Our ranking system reveals the journal–researcher clientele, and we find that collaboration (co-authoring) between faculty within elite programs exists only in top-tier and near-top-tier journals. Publications in lower-tier journals by researchers of elite programs are driven by their co-authors. Collaboration between faculty in elite and non-elite programs, however, is more prevalent than that within elite programs across all tiers of journals. Co-authorship among top 80 programs, nevertheless, is more common in top-tier journals, while co-authorship between top 80 and other programs is more dominant in lower-ranked journals.
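A toy illustration of the ratio defined above, assuming an article counts toward the numerator when at least one author sits at a top-80 program; the published index involves further construction choices this sketch ignores, and the program names below are an illustrative subset, not the paper's actual list.

```python
def author_affiliation_index(articles, top_programs):
    """Simplified AAI: share of a journal's articles with at least one
    author affiliated with a top-80 program."""
    elite = sum(
        1 for author_affiliations in articles
        if any(a in top_programs for a in author_affiliations)
    )
    return elite / len(articles)

# Each article is represented by its authors' affiliations (hypothetical data).
top80 = {"Chicago", "Wharton", "MIT"}
journal_articles = [
    ["Chicago", "State U"],
    ["State U"],
    ["Wharton"],
    ["Regional College", "Other U"],
]

print(author_affiliation_index(journal_articles, top80))  # 2 of 4 articles -> 0.5
```

The appeal the abstract claims is visible here: the inputs are just affiliation lists, so the index is cheap to compute compared with surveys or full citation counts.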
