Occupy Impact: Measuring the impact of a highly diverse research community is a hard problem. Why not make it a challenge we own collectively?

See on Scoop.it – Dual impact of research; towards the impactelligent university

Whether you call it the impact agenda, a necessary evil, or a key tool to drive research investment – impact measurement is here to stay. But it is imperfect and there are problems to solve. What are we measuring? What are we missing? How do we strike a balance between “the countable” and “the uncountable”? How do we factor in all impacts, not just those upon broader society? How do we absorb the added burden this brings to reporting?

We (researchers, innovators, funders, universities) need to collaborate to solve these problems in an informed manner. Otherwise we risk less informed solutions.

This event aims to be part of the solution. We are calling for papers and presentations on any of the following topics:

What is the right balance between numbers and narratives? Are we measuring what we value or valuing what we measure? How do different fields and disciplines differ in impact/influence? Is attribution possible? Is our only impact that which can be measured? Are the current measures sensitive enough or too blunt an instrument?

Measuring the impact of a highly diverse research community is a hard problem but one that we need to ‘occupy’ together as a community in order to make it something we own collectively.

The CASRAI community is a unique blend of policy-makers and policy-implementers in the research and innovation domain.

re/connect: The Annual International CASRAI Conference to Connect Research – October 10-12, 2012 • Montreal, QC…

See on www.verney.ca

Counting citations in the field of business and management: why use Google Scholar rather than the Web of Science

See on Scoop.it – Dual impact of research; towards the impactelligent university

Research assessment carries important implications both at the individual and institutional levels. This paper examines the research outputs of scholars in business schools and shows how their performance assessment is significantly affected when using data extracted either from the Thomson ISI Web of Science (WoS) or from Google Scholar (GS). The statistical analyses of this paper are based on large-scale survey data from scholars in Canadian business schools, used jointly with data extracted from the WoS and GS databases. Firstly, the findings of this study reveal that the average performance of business (B) scholars regarding the number of contributions, citations, and the h-index is much higher when performances are assessed using GS rather than WoS. Moreover, the results also show that the scholars who exhibit the highest performances when assessed in reference to articles published in ISI-listed journals also exhibit the highest performances in Google Scholar. Secondly, the absence of association between the strength of ties forged with companies, as well as between the customization of the knowledge transferred to companies, and the research performances of B scholars as measured by indicators extracted from WoS and GS, provides some evidence suggesting that mode 1 and mode 2 knowledge production might be compatible. Thirdly, the results also indicate that senior B scholars did not differ in a statistically significant manner from their junior colleagues with regard to the proportion of contributions compiled in WoS and GS. However, the results show that assistant professors have a higher proportion of citations in WoS than associate and full professors have. Fourthly, the results of this study suggest that B scholars in accounting tend to publish a smaller proportion of their work in GS than their colleagues in information management, finance and economics.
Fifthly, the results of this study show that there is no significant difference between the contributions record of scholars located in English language and French language B schools when their performances are assessed with Google Scholar. However, scholars in English language B schools exhibit higher citation performances and higher h-indices both in WoS and GS. Overall, B scholars might not be confronted by having to choose between two incompatible knowledge production modes, but with the requirement of the evidence-based management approach. As a consequence, the various assessment exercises undertaken by university administrators, government agencies and associations of business schools should complement the data provided in WoS with those provided in GS.



Counting citations in the field of business and management: why use Google Scholar rather than the Web of Science

Nabil Amara and Réjean Landry


2012, DOI: 10.1007/s11192-012-0729-2

See on www.springerlink.com

From bench to bedside: The societal orientation of research leaders

See on Scoop.it – Dual impact of research; towards the impactelligent university

This paper addresses five questions about the societal impact of research. Firstly, what do research group leaders think of the increased emphasis on societal impact: does it influence their research agenda, their communication with stakeholders, and their dissemination of knowledge to stakeholders? Furthermore, what is the quality of their societal output? The authors also study whether the societal and scholarly productivity of academic groups are positively or negatively related. In addition, they investigate which managerial and organisational factors (e.g. experience of the principal investigator, group size and funding) influence societal output. Finally, they show for one case (virology) that societal impact is also visible through indirect links. The study shows that research group leaders have a slightly positive attitude towards the increased emphasis on the societal impact of research. The study also indicates a wide variety of societally oriented output. Furthermore, the societal and scientific productivity of academic groups are unrelated, suggesting that stimulating societal relevance requires specific organisational and contextual interventions.



From bench to bedside: The societal orientation of research leaders: The case of biomedical and health research in the Netherlands

Science and Public Policy (2012) 39 (3): 285-303. doi: 10.1093/scipol/scr003

Inge van der Weijden, Maaike Verbree and Peter van den Besselaar

– Author Affiliations

Centre for Science and Technology Studies (CWTS), Leiden University, Wassenaarseweg 62-A, 2333 AL Leiden, The Netherlands; Department of Economics and Management, University of Applied Sciences Utrecht, Padualaan 101, 3584 CH Utrecht, The Netherlands; Department of Organization Sciences and Network Institute, VU University, De Boelelaan 1081, 1081 HV Amsterdam, The Netherlands

See on spp.oxfordjournals.org

Newer indices measuring scholarly author impact

See on Scoop.it – Dual impact of research; towards the impactelligent university

Newer indices measuring scholarly impact

1) Age-weighted citation rate (AWCR, AWCRpA) & AW-index

Inspired by Jin’s The AR-index: complementing the h-index, the AWCR is an age-weighted citation rate in which the number of citations to a paper is divided by the paper’s age. Jin defines the AR-index as the square root of the sum of all age-weighted citation counts over the papers that contribute to the h-index. In Publish or Perish, the sum is instead taken over all papers, as these represent the impact of a scholar’s total body of work. (This allows younger and less-cited papers to contribute to the AWCR even though they may not yet contribute to the h-index.)
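As a concrete illustration, here is a minimal Python sketch of this kind of age weighting (the age convention is an assumption here: a paper published in the current year is given age 1, which may differ from the exact Publish or Perish implementation):

```python
def awcr(papers, current_year):
    """Age-weighted citation rate: sum over all papers of
    citations divided by the paper's age (current year counts as age 1)."""
    return sum(cites / (current_year - year + 1) for year, cites in papers)

def aw_index(papers, current_year):
    """AW-index: the square root of the AWCR."""
    return awcr(papers, current_year) ** 0.5
```

For example, a 2012 paper with 10 citations and a 2010 paper with 9 citations give an AWCR of 10/1 + 9/3 = 13 in 2012.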


2) Contemporary h-index

Proposed in Generalized h-index for disclosing latent facts in citation networks, this index aims to improve on the h-index by giving more weight to recent articles, thus rewarding academics who maintain a steady level of activity. The age-related weighting is parametrized; the Publish or Perish implementation uses gamma=4 and delta=1, as the authors did in their experiments. This means that the citations of an article published in the current year count four times, those of an article published 4 years ago count once, those of an article published 6 years ago count 4/6 times, and so on.


3) Eigenfactor

Eigenfactor.org is an academic research project at the University of Washington. Developed by West and Bergstrom, the Eigenfactor is a rating of the total importance of a scientific journal. It is reminiscent of Google’s PageRank algorithm in that journals are rated according to “link love”, i.e. the number of incoming citations; moreover, citations from highly-ranked journals are weighted more heavily than those from poorly-ranked ones. An Eigenfactor score rises with the total impact of a journal, so journals that generate a higher impact in the field have a higher Eigenfactor score.

Eigenfactor is also used in network analysis to develop methods to evaluate the influence of scholarly journals and map academic outputs in various disciplines.


4) Egghe’s g-index

In the Theory and practice of the g-index, Egghe aims to improve on the h-index by giving more weight to highly-cited articles. The g-index quantifies scientific productivity based on the distribution of citations received by a researcher’s publications: given a set of articles ranked in decreasing order of the number of citations they received, the g-index is the (unique) largest number g such that the top g articles together received at least g² citations.
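The definition translates directly into code; this sketch caps g at the number of published papers, one common convention:

```python
def g_index(citations):
    """Largest g such that the top g papers together
    have at least g**2 citations."""
    ranked = sorted(citations, reverse=True)
    running_total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        running_total += cites
        if running_total >= rank * rank:
            g = rank
    return g
```

A record of [10, 1, 1] has an h-index of 1 but a g-index of 3, showing how the g-index rewards a single highly cited paper.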


5) E-index

The e-index, introduced in Zhang’s The e-index, complementing the h-index for excess citations, is the square root of the surplus of citations in the h-core beyond h² (i.e. beyond the citations needed to attain the h-index itself). One of the aims of the e-index is to differentiate between scientists with identical h-indices but different citation counts. Another advantage is that it reflects the contributions of an author’s highly cited papers, which the h-index usually ignores. Zhang says that the e-index “is a necessary h-index complement, especially for evaluating highly cited scientists or for precisely comparing the scientific output of a group of scientists having an identical h-index.”
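In code, the e-index is simply the square root of whatever is left in the h-core after subtracting h²; a minimal sketch:

```python
def e_index(citations):
    """Zhang's e-index: sqrt of excess citations in the h-core beyond h**2."""
    ranked = sorted(citations, reverse=True)
    h = sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)
    excess = sum(ranked[:h]) - h * h
    return excess ** 0.5
```

Two scientists with citation records [4, 4, 4, 4] and [20, 10, 6, 4] share h = 4, but their e-indices are 0 and √24 ≈ 4.9 respectively.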


6) Google’s I10-index

The i10-index indicates the number of papers an author has written that have been cited at least ten times by other scholars. It was introduced by Google in 2011 as part of their work on Google Scholar, a search tool that locates academic and related papers.
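The i10-index is the simplest of these metrics to compute; a one-line Python sketch:

```python
def i10_index(citations):
    """Number of papers cited at least ten times."""
    return sum(1 for cites in citations if cites >= 10)
```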


7) Hirsch’s h-index

see also H-b index

In An index to quantify an individual’s scientific research output, Hirsch aims to provide a single-number metric of an academic’s impact, combining quality with quantity. The h-index measures the impact of individual scientists in their respective fields: a scientist has an h-index of n when n of their publications have each been cited at least n times. This rewards publishing many good articles rather than a few poor ones. It is also difficult to inflate through self-citation (a common problem), and one or a few lucky “hits” will not on their own improve it. The h-index becomes reliable once a researcher has a substantial body of output. It is important to emphasize that a single number cannot describe a scientist; the h-index is only one measure of scholarly impact.

Since Hirsch introduced the h-index in 2005, this measure of academic impact has garnered widespread interest as well as proposals for other indices based on analyses of publication data, such as the g-index, h(2)-index, m-quotient and r-index, to name a few. Several commonly used databases, such as Elsevier’s SciVerse Scopus, Thomson Reuters’ Web of Science, Google Scholar Citations and Microsoft Academic Search, provide h-index values for authors.
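The h-index itself is straightforward to compute from a citation list; a minimal Python sketch:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h
```

A record of [10, 8, 5, 4, 3] yields h = 4: four papers have at least four citations each, but there are not five with at least five.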

Automated computation of the h-index:
Quadsearch – http://quadsearch.csd.auth.gr/index.php?lan=1&s=2
H-View Visualizer – http://hview.limsi.fr/


8) Individual h-index

Proposed in Is it possible to compare researchers with different scientific interests?, this index divides the standard h-index by the average number of authors in articles that have contributed to the h-index calculation in order to reduce the effect of co-authorship. See also Rad AE, Brinjikji W, Cloft HJ, Kallmes DF. The h-index in academic radiology. Acad Radiol. 2010 May 14.
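A minimal Python sketch of this normalization, assuming papers are given as (citations, number of authors) pairs (the exact averaging convention varies between proposals, so treat this as illustrative):

```python
def individual_h_index(papers):
    """Standard h-index divided by the mean number of authors
    of the papers in the h-core. papers: (citations, n_authors) pairs."""
    ranked = sorted(papers, key=lambda p: p[0], reverse=True)
    h = sum(1 for rank, (cites, _) in enumerate(ranked, start=1) if cites >= rank)
    if h == 0:
        return 0.0
    mean_authors = sum(authors for _, authors in ranked[:h]) / h
    return h / mean_authors
```

A scientist with h = 3 whose h-core papers each have two authors thus gets an individual h-index of 1.5.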


9) R-Impact

The Reliability-Based Citation Impact Factor seeks to quantify a journal’s effectiveness, and incorporates citation data over the journal’s lifespan instead of more recent performance histories. see Kuo W, Rupe J. R-Impact: reliability-based citation impact factor. IEEE Transactions on Reliability. 2007;56(3):366-367.


10) Universal h-index

In the Universality of citation distributions: toward an objective measure of scientific impact, the tagging of authors with disciplines allows a Tenurometer to compute a new universal h-index. The universal h-index allows researchers to compare the impact of authors in different disciplines with different citation patterns.


11) ‘w-index’ or Wu Index

In The w-index: a significant improvement of the h-index, Wu describes an index similar to the h-index. Under Hirsch’s criteria, an h-index of 9 indicates that a researcher has published at least 9 papers, each of which has been cited 9 or more times. The w-index instead indicates that a researcher has published w papers with at least 10w citations each; a w-index of 24 means 24 papers with at least 240 citations each.

Wu says his index is an improvement on the h-index as it “accurately reflects the influence of a scientist’s top papers”, and that it could also be called the “10h-index”. The w-index is easy to calculate using the Web of Knowledge, Scopus (Elsevier) or Google Scholar, in the same way as the h-index: search for a researcher’s name and list all of their papers in order, most highly cited first.
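The w-index can be sketched in Python in the same pattern as the h-index, just with a tenfold citation threshold:

```python
def w_index(citations):
    """Wu's w-index: largest w such that w papers
    have at least 10*w citations each."""
    ranked = sorted(citations, reverse=True)
    w = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= 10 * rank:
            w = rank
    return w
```

A record of [240, 150, 30, 5] gives w = 3: three papers have at least 30 citations each, but no fourth paper has 40.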


Source: HLWIKI Canada



See on hlwiki.slais.ubc.ca

Multidimensional Journal Evaluation Analyzing Scientific Periodicals beyond the Impact Factor

See on Scoop.it – Dual impact of research; towards the impactelligent university

Scientific communication depends primarily on publishing in journals. The most important indicator to determine the influence of a journal is the Impact Factor. Since this factor only measures the average number of citations per article in a certain time window, it can be argued that it does not reflect the actual value of a periodical. This book defines five dimensions, which build a framework for a multidimensional method of journal evaluation. The author is winner of the Eugene Garfield Doctoral Dissertation Scholarship 2011.


Haustein, Stefanie

Multidimensional Journal Evaluation

Analyzing Scientific Periodicals beyond the Impact Factor. De Gruyter, 2012. ISBN: 978-3-11-025555-3

Contents: http://www.degruyter.com/view/supplement/9783110255553_Contents.pdf

See on www.degruyter.com

Scientific Mobile Applications: mobile apps for science

See on Scoop.it – Dual impact of research; towards the impactelligent university

Mobile apps for science are expanding in scope and capability very quickly, yet there is no easy way to source information regarding what is available, what the community thinks of these apps (in terms of general reviews) and clustering of these apps into functional groupings. That is the intention of this wiki. It is a community resource for developers and users to share information about the various science apps that are available.


See on www.scimobileapps.com

From ‘Ivory Tower Traditionalists’ to ‘Entrepreneurial Scientists’?

See on Scoop.it – Dual impact of research; towards the impactelligent university

Growing intensity of university-industry ties has generated an intense debate about the changing norms and practices of academic scientific work. This study challenges the protagonists’ views on the emergence of a dominant market ethos in academic science and growing influence of the ‘new school’ entrepreneurial scientists. It argues that academic scientists are active agents seeking to shape the relationships between science and business, and shows continued diversity in their work orientations. Drawing on neo-institutional theory and the notion of ‘boundary work’, the study examines how scientists seek to protect and negotiate their positions, and also make sense of their professional role identities. It identifies four different orientations: the ‘traditional’ and ‘entrepreneurial’, with two hybrid types in between. The hybrids are the dominant category and are particularly adept at exploiting the ambiguities of ‘boundary work’ between academia and industry. The study is based on 36 interviews and a survey sample of 734 academic scientists from five UK research universities.


Source: From ‘Ivory Tower Traditionalists’ to ‘Entrepreneurial Scientists’?

Academic Scientists in Fuzzy University—Industry Boundaries

Alice Lam, School of Management, Royal Holloway University of London, Egham, Surrey. Published online before print February 18, 2010, doi: 10.1177/0306312709349963. Social Studies of Science, April 2010, vol. 40, no. 2, 307-340.

See on sss.sagepub.com
