Academia Insider

What Is a Good H-Index? The H-Index Required for an Academic Position

In the academic world, the h-index score stands as a pivotal metric, gauging the impact and breadth of a researcher’s work. Understanding what constitutes a good h-index is crucial for academics at all stages, from budding PhD students to seasoned professors.

This article looks into the h-index, exploring what scores are considered impressive across various disciplines and career stages.

  • PhD Student: An h-index between 1 and 5 is typical for PhD students nearing the end of their program, reflecting their early stage in academic publishing.
  • Postdoc and Assistant Professor: Early career researchers like postdoctoral fellows or assistant professors often find an h-index around 5 to 10 impressive, indicating a solid start in their respective fields.
  • Associate Professor: At this more advanced stage, an h-index of 10 or more is generally expected, reflecting a consistent record of impactful research.
  • Full Professor: For full professors, an h-index of 15 or higher is often seen, indicating a long and impactful career in research and academia.

How To Calculate Your H-Index Score?

In the academic world, the h-index score is a critical metric, essentially acting like a report card for scholars.


The h-index is a measure of a researcher’s productivity and impact. It was designed to combine the number of papers published with the number of citations each paper receives.

Now that you know what an h-index score is, you may wonder how to find your own. Fortunately, platforms like Google Scholar and Web of Science can come in handy.

They track your number of publications and the number of times those publications are cited, crunching these numbers into your h-index.
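As an illustration of what these platforms compute (a minimal sketch, not any platform’s actual code), the h-index can be derived from a list of per-paper citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    # Count ranks while the paper at each rank still has >= rank citations.
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

# Five papers cited 9, 7, 6, 2 and 1 times: three papers have at least
# 3 citations each, but there are not four papers with at least 4 each.
print(h_index([9, 7, 6, 2, 1]))  # 3
```

The same counting rule underlies all the figures quoted in this article; the platforms simply apply it to the publication and citation data they have indexed for you.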

This number can vary based on the field and years of research experience. A full professor might be expected to have a higher h-index, reflecting more years of impactful research.

Google Scholar

To find out your h-index score from Google Scholar, you can follow the steps below:

  • Create a Google Scholar Profile : If you don’t already have one, go to Google Scholar and create a profile. Fill in your academic details and affiliations.
  • Add Publications : Ensure all your research publications are listed in your profile. You can add them manually or import them if they are already available on Google Scholar.
  • Verify your Publications : Make sure the publications listed are indeed yours, as sometimes publications from other authors with similar names might appear.
  • Check the Citations Section : Once your profile is complete and updated, look for the ‘Citations’ section on your profile page. This is usually located at the top and easy to spot.
  • Find Your H-Index : In the Citations section, you will see your h-index listed among other citation metrics like the total number of citations and the i10-index.

Web Of Science

To find out your h-index score from Web Of Science, you can follow the steps below:

  • Access Web of Science : Go to the Web of Science website. Access may require an institutional login, depending on your affiliation.
  • Search for Your Name : Use the author search function to find your publications. Ensure you search with variations of your name if you’ve published under different names or initials.
  • Create a Citation Report : Once your publications are listed, select them and create a citation report. This option is typically found above the list of your publications.
  • View Your H-Index : In the citation report, your h-index will be displayed. This number is calculated based on the total number of papers you’ve published and the number of citations each paper has received.

What H-Index Is Considered Good For A PhD Student?

For a PhD student, the world of academic metrics can be daunting, especially when it comes to the h-index, a measure that intertwines the number of publications with their citation impact.

So, what h-index score should you, as a PhD student, aim for?

A “good” h-index can vary based on your field of study and the stage of your PhD program.

Generally, for PhD students, a lower h-index is expected and completely normal. You’re just beginning your journey in academic publishing.


An h-index between 1 and 5 might be typical for students nearing the end of their PhD. This means you have 1 to 5 publications that have been cited at least 1 to 5 times, respectively.

Your h-index can be calculated using tools like Google Scholar or Web of Science. These platforms track your published papers and the number of citations each receives.

As a PhD student, your focus should be on publishing quality research in reputable journals, as this will gradually increase your h-index.

Remember, while a higher h-index is beneficial for future academic positions, it’s not the only metric that matters. Your research’s quality, relevance, and impact in your field are equally important. A single highly influential paper might open more doors than several less impactful ones.

What H-Index Is Required for an Academic Position?

When applying for academic positions, your h-index can be as crucial as your research itself. This metric, a blend of productivity and impact, is often scrutinized by hiring committees.

But what number should you aim for? A good h-index varies by field and career stage.

PostDoc, Assistant Professors


For early career researchers, like postdoctoral fellows or assistant professors, an h-index around 5 to 10 is often impressive.

It shows you’ve made a mark in your field, with a number of papers that have been cited at least that many times. 

Associate Professor, Full Professor

In more senior roles, such as a tenured associate professor or full professor, expectations rise.

Here, an h-index of 10 or 15 might be the minimum, with higher numbers not uncommon.

This single number, while important, doesn’t tell the whole story. A young researcher might have a lower h-index simply due to less time in the field. Moreover, some fields tend to have higher citation rates, which can inflate h-index scores.

It’s wise to keep an eye on your h-index, especially if you’re eyeing:

  • Competitive academic positions,
  • Research funding
  • Collaboration opportunities.

Improving your h-index involves not just publishing papers, but ensuring they are of high quality and relevance, increasing the likelihood of citations.

In sum, a good h-index is one that matches your career stage and field, reflecting both the quantity and impact of your work. However, it’s not the sole measure of your worth as a researcher.

The breadth and depth of your contributions, beyond just citation counts, also paint a vivid picture of your academic and scientific impact.

What Metric Influences H-Index Score?

Your h-index score is influenced by several key factors:

  • Number of Publications : The more papers you publish, the greater the potential for citations. It’s a numbers game, but quality over quantity should be your mantra. High-caliber papers in respected journals often garner more attention and citations.
  • Citations Per Publication : Your h-index heavily relies on how often your papers are cited. Even if you have a plethora of publications, your h-index won’t shine if they’re seldom cited.
  • Years of Research Experience : A young researcher might have a lower h-index compared to a full professor, who has had more time to build their citation record.
  • Research Field : The h-index varies widely across disciplines. Fields with rapid publication and citation rates like biomedical sciences often see higher h-index scores than, say, humanities. So, a good h-index in one field might be considered low in another.
  • Access to Research Collaborations : Collaborations can boost your h-index. Working with other researchers can increase the visibility and citation potential of your papers. However, too many authors on a single paper might dilute the perceived contribution of each.

Remember, while a high h-index can be indicative of a significant academic impact, it’s not the sole measure of your scientific worth. It’s a good idea to give your h-index some consideration, but also focus on the broader spectrum of your academic contributions.

How To Increase H-Index Score?

Increasing your h-index, a metric reflecting the impact and productivity of your academic work, is a strategic goal for many researchers.

This single number, representing the intersection of the quantity of your publications and their citation impact, can play a pivotal role in securing research grants and academic positions.

To boost your h-index, focus on publishing quality research in well-regarded journals. A paper published in a respected journal is more likely to be cited, and each citation nudges your h-index upwards.

For example, if you’re an assistant professor with an h-index of 5, aiming for journals with high visibility in your field can help you reach a higher h-index, making you more competitive for positions like associate or full professor.

Collaboration is another key strategy. Co-authoring with established researchers can increase the reach and citation potential of your papers.

This, however, comes with a caveat: the more authors on a paper, the more diluted your perceived contribution might be. Aim for a balance in co-authorship.

Active engagement in the academic community also matters. Increase citations on your work by:

  • Presenting at conferences,
  • networking, and
  • keeping your profiles up to date on platforms like Google Scholar and Web of Science.

Remember, the h-index varies by field and career stage. A good h-index for a young researcher might be 10, while more senior academics might aim for higher numbers. Using databases like Google Scholar, you can track your number of cited publications and calculate your h-index.


While a higher h-index can bolster your academic profile, it’s not the sole indicator of your scholarly worth – low h-index score is not a dealbreaker in many cases. It’s wise to consider it alongside other measures of your academic and scientific impact.

Good H-Index Score May Vary

A good h-index score is relative, varying across academic fields and career stages. While it offers a valuable snapshot of a researcher’s impact and productivity, it’s important to view it as one part of a larger picture.

Aspiring for a higher h-index should go hand in hand with maintaining the quality and relevance of research. Ultimately, the h-index is a useful tool, but it’s the depth and innovation of your work that truly define your academic legacy.


Dr Andrew Stapleton has a Masters and PhD in Chemistry from the UK and Australia. He has many years of research experience and has worked as a Postdoctoral Fellow and Associate at a number of universities. Despite having secured funding for his own research, he left academia to help others with his YouTube channel, all about the inner workings of academia and how to make it work for you.


2024 © Academia Insider

What Is a Good H-Index Required for an Academic Position?

Posted by Rene Tetzner | Sep 3, 2021 | Career Advice for Academics , How To Get Published | 0 |


Metrics are important. Even scholars who may not entirely agree with the ways in which academic and scientific impact is currently measured and used cannot deny that metrics play a significant role in determining who receives research grants, employment offers and desirable promotions. The h-index is only one among various kinds of metrics now applied to the research-based writing of professional scholars, but it is an increasingly significant one. Introduced by the physicist Jorge Hirsch in a paper published in 2005, the h-index was designed to assess the quantity and quality of a scientist’s contributions and predict his or her productivity and influence in the coming years. However, its use and importance have quickly expanded beyond physics and the sciences into a wide variety of disciplines and fields of study. If you are applying for a scientific or academic position, hoping for a promotion or in need of research funding, it will therefore be wise to give your h-index score some consideration, but within reason. In some fields, the h-index and other forms of metrics play a very small part, if any, in hiring and funding, and there are still many other means used by hiring and funding committees to assess scholarly contributions.


The h-index is considered preferable to metrics that measure only a researcher’s number of publications or the number of times those publications have been cited. This is because it combines the two, considering both publications and citations to arrive at a particular value. A scholar who has five publications that have been cited at least five times has an h-index of 5, whereas a scholar with ten publications that have been cited at least ten times has an h-index of 10. Publication and citation patterns differ markedly across disciplines and fields of study, and the expectations of hiring and funding bodies vary depending on the level and type of position and the kind and size of research project, so it is impossible to say exactly what might be considered an acceptable or competitive h-index in a given situation. H-index scores between 3 and 5 seem common for new assistant professors, scores between 8 and 12 fairly standard for promotion to the position of tenured associate professor, and scores between 15 and 20 about right for becoming a full professor. Be aware, however, that these are gross generalisations and actual figures vary enormously among disciplines and fields: there are, for instance, many full professors, deans and chancellors with very low h-index scores, and an exceptional young researcher with an h-index of 10 or 15 might conceivably still be working as a postdoc.


As a general rule in many fields, an h-index that matches the number of years a scholar has been working in the field is a respectable score. Hirsch in fact suggested that the h-index be used in conjunction with a scholar’s active research time to arrive at what is known as Hirsch’s individual m. It is calculated by dividing a scientist’s h-index by the number of years that have passed since the first publication, with a score of 1 being very good indeed, 2 being outstanding and 3 truly exceptional. This means that if you have published at least one well-cited document each year since your first publication – a decent textual output by any measure – you are among a successful group of scholars, and if you have published two or three times that number of well-cited documents over the same period of time, you are among the intellectual superstars of your discipline and probably of your time. To put this into perspective, from what I can find online it looks like Stephen Hawking has a score of about 1.6 by this calculation. If you can approach a hiring committee or funding body with anything close to that, you are certainly going to be a serious contender in the competition.
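Hirsch’s individual m described above is simple division; a quick sketch (the h-index and career length below are hypothetical illustrations, not figures from the article):

```python
def m_index(h, years_since_first_publication):
    """Hirsch's individual m: h-index divided by years of active publishing."""
    return h / years_since_first_publication

# A hypothetical scholar with an h-index of 24, 16 years after their first paper:
print(m_index(24, 16))  # 1.5 -- "very good" territory by Hirsch's rule of thumb
```

By the same arithmetic, matching one well-cited publication per year of activity keeps a scholar at m near 1, the threshold the article describes as very good indeed.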


The h-index as a measure of both the quantity and quality of scholarly achievement is considered quite reliable and robust, so it has proved incredibly popular and is now applied not only to individual researchers, but also to research groups and projects, to scholarly journals and publishers, to academic and scientific departments, to entire universities and even to entire countries. As with all metrics, however, the h-index is subject to a number of biases and limitations, so there are significant problems associated with relying solely on h-index scores when making important research and career decisions. The h-index does not, for example, account for publications with citation numbers far above a researcher’s h-index or distinguish any difference between publications with a single author or many. Older publications are counted exactly as more recent ones are and older scholars benefit, whether they have published anything new in years or not. Neither the length of a publication nor the nature of each citation (positive or negative) is considered, so those measures of quantity and quality are not part of the picture. Early career researchers who take the time to delve deeply into an important problem and eventually produce an excellent article and scholars at any stage in their careers who dedicate time to teaching or practical applications of research will have lower scores than those who crank out mediocre articles based on uninteresting research that is nonetheless cited by their colleagues. Finally, the databases from which the h-index and other metrics are determined vary in the types of documents they consider and the fields of study they include, so the same scholar will not receive the same h value across all of them, and accurate comparison across fields and disciplines is impossible.

These and other problems have generated a number of adjustments that are rather similar to Hirsch’s individual m, which, as discussed above, considers a scholar’s active research time in relation to his or her h-index. The g-index gives greater weight to publications whose citation counts exceed a researcher’s h value; the hi index corrects for the number of authors; the hc index corrects for the age of publications, with recent citations earning more counts; and the c-index considers collaboration distance between the author of a publication and the authors citing it. Solutions for comparison between disciplines and fields have included dividing the h-index scores of scholars by the h-index averages in their respective fields to arrive at results that can be compared, but defining fields can be tricky, and larger fields of study with more researchers naturally generate more citations. The databases used for scholarly metrics are constantly upgrading and broadening their inclusiveness to render metrics like the h-index more truly representative of a researcher’s actual productivity and impact, so the accuracy and consistency of these tools are likely to continue improving. However, no new numbers or calculations can add what all of these metrics lack, and that is research content – the valuable and unique content that makes the publication of research a worthy task in the first place.

Committees gathered to hire or promote faculty or to select the recipients of research grants rarely rely solely on metrics when making their decisions. If they are doing their jobs properly, they combine what they can gather from metrics with other information about candidates and their scholarly impact. They do not just notice how many times the papers of candidates have been cited; they read those papers and consider their content, and they pay attention to the other activities of the scholars they are considering. This wider perspective is appropriate for an applicant as well, so if you are polishing your CV, putting together a grant application or preparing for a job interview, look over your own unique achievements with a kindly yet critical eye and consider them in direct relation to what the job posting or grant regulations indicate is wanted. If you happen to have a wonderful h-index score or any other impressive metrics, by all means flaunt them, and if you fear that a low h value will compromise your career aspirations, do what you can to have your publications with lower citation counts read and used more often, update your profiles on the relevant databases, and publish the type of document sure to garner citations in your field, such as a review article.

Do keep in mind, however, that hiring and funding committees are often looking for far more than large numbers of highly cited publications. Admittedly, they rarely balk at them, but universities are also seeking excellent teachers, advisors and administrators, so play up those skills and any related experience you have, and remember that financial supporters of research may be keen to fund scholars who can successfully manage and complete projects, even and perhaps especially if part of the training they offer younger researchers means that their students tend to publish most of the results. Finally, an active online presence in your field established through sharing your research via blogs, professional platforms and social media might not garner the same respect as formal publications, but it can count for a great deal when many universities are working to increase their online activities and funding bodies working to democratise the publication of the research they support. Generally speaking, committees considering applications will be even more likely to google the names of candidates and applicants than to look up the metrics associated with them, so assume that both will be done and ensure that what can be found shares excellent research content and leaves a desirable professional impression of you and your work.



Maximizing your research identity and impact


h-index for researchers: definition

  • The h-index is a measure used to indicate the impact and productivity of a researcher, based on how often his/her publications have been cited.
  • The physicist Jorge E. Hirsch provides the following definition for the h-index: a scientist has index h if h of his/her N_p papers have at least h citations each, and the other (N_p − h) papers have no more than h citations each. (Hirsch, J.E. (15 November 2005) PNAS 102(46): 16569–16572)
  • The h-index is based on the highest number of papers written by the author that have each had at least the same number of citations.
  • A researcher with an h-index of 6 has published six papers that have each been cited at least six times by other scholars. This researcher may have published more than six papers, but only six of them have been cited six or more times.

Whether an h-index is considered strong, weak or average depends on the researcher’s field of study and how long they have been active. The h-index of an individual should be considered in the context of the h-indices of equivalent researchers in the same field of study.

h-index for journals

Definition: The h-index of a publication is the largest number h such that at least h articles in that publication were cited at least h times each. For example, a journal with an h-index of 20 has published 20 articles that have been cited 20 or more times.

Available from:

  • SJR (Scimago Journal & Country Rank)

Whether an h-index is considered strong, weak or average depends on the discipline the journal covers and how long it has been published. The h-index of a journal should be considered in the context of the h-indices of other journals in similar disciplines.

h-index for institutions

Definition: The h-index of an institution is the largest number h such that at least h articles published by researchers at the institution were cited at least h times each. For example, if an institution has an h-index of 200, its researchers have published 200 articles that have been cited 200 or more times.

Available from: exaly

Computing your own h-index

In a spreadsheet, list the number of times each of your publications has been cited by other scholars.

Sort the spreadsheet in descending order by the number of times each publication is cited. Then count down the list: the h-index is the last rank at which the article number is still less than or equal to its times cited.

Article    Times Cited
1          50
2          15
3          12
4          10
5          8
6          7     ← h-index is 6
7          5
8          1
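The counting-down procedure in the table above translates directly into code (a small sketch using the table’s own citation counts):

```python
def h_from_citations(times_cited):
    """Sort descending, then take the last rank where rank <= times cited."""
    h = 0
    for rank, cites in enumerate(sorted(times_cited, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this row still satisfies rank <= citations
    return h

# The eight articles from the table, cited 50, 15, 12, 10, 8, 7, 5 and 1 times:
print(h_from_citations([50, 15, 12, 10, 8, 7, 5, 1]))  # 6, matching the table
```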

Further reading:

  • How to Successfully Boost Your h-Index (Enago Academy, 2019)
  • Glänzel, Wolfgang. On the Opportunities and Limitations of the h-Index (2006)

Variations of the h-index

  • 5-year h-index: the h-index based upon data from the last 5 years only.
  • i10-index: the number of articles by an author that have at least ten citations; created by Google Scholar.
  • m-index: the h-index divided by the number of years since the author’s first publication; used to compare researchers with different lengths of publication history.
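The variants above can be computed from the same per-paper citation counts; a hedged sketch (the citation lists are made-up examples):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    return sum(1 for rank, c in enumerate(sorted(citations, reverse=True), 1) if c >= rank)

def i10_index(citations):
    """Google Scholar's i10-index: papers cited at least ten times."""
    return sum(1 for c in citations if c >= 10)

cites = [50, 15, 12, 10, 8, 7, 5, 1]
print(i10_index(cites))      # 4 papers with ten or more citations

recent = [12, 9, 4, 3, 1]    # same papers, counting only last-5-year citations
print(h_index(recent))       # 3: the 5-year h-index
```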

Using Scopus to find a researcher's h-index

Additional resources for finding a researcher's h-index

Web of Science Core Collection or Web of Science All Databases

  • Perform an author search
  • Create a citation report for that author.
  • The h-index will be listed in the report.

Set up your author profile in the following resources. Each resource will compute your h-index. Your h-index may vary, since each of these sites collects data from different sources.

  • Google Scholar Citations: Computes h-index based on publications and cited references in Google Scholar.
  • Researcher ID: Computes h-index based on publications and cited references in the last 20 years of Web of Science.
  • Last Updated: Nov 15, 2023 11:59 AM
  • URL: https://libraryguides.missouri.edu/researchidentity


Course blog for INFO 2040/CS 2850/Econ 2040/SOC 2090

The H-Index: good or bad?

Anyone working in academia is well aware of the ubiquity of h-indexes. To many professors and graduate students, the h-index is perhaps the most widely used metric for determining the influence of one’s work. This single number is used to convey the influence you have had in your research career, is pivotal to career advancement, and is used in part to determine the relative influence of different academic institutions. Given the ubiquity and power of such an index in the academic sphere, we must pause for a second and ask: is this actually the best method for ranking the merit of different scientists? Have we perhaps learned of better alternatives for ranking publications within our own course?

First off, I shall define the h-index:

The h-index of an author, h, is the largest number x such that there are x articles published by the author which have at least x citations each. In other words, h is the maximum number of publications by a scientist that were cited at least h times.

As can be seen, this metric (developed by Jorge Eduardo Hirsch of UCSD in 2005) is used to measure the quality and quantity of a researcher's work. The inventor, Hirsch himself, proposes that after 20 years of research, an h-index of 20 is good, 40 is outstanding, and 60 is exceptional. It is an indicator that a researcher is reliable, consistently engaged in meaningful science, and has publications that are widely adopted. However, time and again, the h-index has proven ineffective at honouring the importance of scientific endeavours.

First, consider the young and exceptional scientist. If in their short career they have published 2 great papers with thousands of citations, their h-index is just as good as that of another scientist who has worked for 20 years and published 20 papers, 2 of which have 2 citations each. There is an implicit ageism in the h-index that works against the interests of meritocracy.

Second, consider a scientist Y who is consistently published by the best journals. The h-index does not discriminate between the authority of different hubs: the achievement of being published in a great journal is treated as equal to being published in the worst one. The h-index does not take into account the fact that some citations are more impressive than others, and more indicative of meaningful work. It is not fair to treat every referrer with the same sense of credibility.

Third, the h-index encourages authors to produce less important publications that would enhance their index, since the h-index is bounded above by the number of articles published. For instance, I could compartmentalise my research into 4 different research papers for a better h-index, even though the ideas might be better expressed in a single paper. This creates a culture in academia of prioritising quantity, publishing more papers to convey influence, instead of focusing on the quality and merit of the science itself.

I could not help but pause and think, have we learned a better model in our Networks class? Could we not provide a better score than the H-Index?

I came across a most thought-provoking article in PLOS, a non-profit open-access science and medicine publisher:

The Pagerank-Index: Going beyond Citation Counts in Quantifying Scientific Impact of Researchers

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.013479

They propose using the PageRank algorithm (as discussed in class) to rank publications in the citation network. Each node receives a value after this process, which can then be distributed among the paper's authors; the sum of PageRank values over all of an author's papers gives that author's score, which can then be compared against all other authors' scores to form a percentile.

The advantage of doing so is that PageRank can compare the sources of information and determine which references are more trustworthy. As discussed in lectures, PageRank is calculated recursively and depends on the metric of all pages that link to a given page. Each page spreads its vote equally among all of its out-links. If a page is linked to by many high-ranked pages, it achieves a high rank.
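As a rough illustration of this vote-spreading idea (a minimal sketch, not the exact method of the PLOS paper), here is a basic power-iteration PageRank over a toy citation graph; the damping factor, iteration count, and graph are illustrative assumptions:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Rank nodes of a citation graph by power iteration.

    `links` maps each paper to the list of papers it cites (its out-links).
    """
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        # Every node keeps a small base amount of rank...
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, cited in links.items():
            # ...and spreads its vote equally among the papers it cites.
            targets = cited if cited else nodes  # dangling papers vote for everyone
            share = damping * rank[node] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

# Toy network: papers A and B both cite C, so C collects their votes
# and ends up with the highest rank.
ranks = pagerank({"A": ["C"], "B": ["C"], "C": []})
print(max(ranks, key=ranks.get))  # C
```

Because the ranks of citing papers feed into the rank of the cited paper, a citation from a highly ranked publication is worth more than one from an obscure publication, which is exactly the property the blog post argues for.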

Here, not all citations are equal, and a publication is important if it is pointed to by other important publications. This is the beauty of PageRank, an elegant solution which we have covered in our course.

In this case, we make the scientific world more meritocratic. We give young authors the chance to be taken seriously if they have already produced valuable works. Further, we give credence to researchers being published in excellent scientific journals over mediocre ones. We could also implement a variant of HITS to achieve similar outcomes, and there are a myriad of strategies we have learned in class that could create a fairer academic environment.

In conclusion, the H-index should be forgotten! Let the academic world move forward and benefit from the might of networks and Google's innovation. After all, Google Scholar is one of the most ubiquitous users of the h-index, and the company itself could lead the way by returning to its own early innovations! Let us use the PageRank algorithm to evaluate scientific research in a fair manner!

November 13, 2020


©2024 Cornell University Powered by Edublogs Campus and running on blogs.cornell.edu


Measuring academic impact: An author-level metric: the H-index

  • Tracking your research impact: sources & metrics
  • An author-level metric: the H-index
  • Journal level metrics

An author-level metric: the h-index

The H-index, proposed by physicist J.E. Hirsch (hence the H) in 2005, is a way to measure the individual academic output of a researcher. "A scientist has index h if h of his or her Np papers have at least h citations each and the other (Np-h) papers have ≤h citations each" (Hirsch, 2005).

To determine the H-index of a researcher you need the list of that researcher's publications and the number of citations each publication has received. Sort the publication list by number of citations, with the most-cited publication first. The H-index is the highest position in this list at which the publication still has at least that many citations.
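The sorting procedure just described is easy to sketch in code; here is a minimal Python version, where `citations` is an assumed plain list of per-publication citation counts:

```python
def h_index(citations):
    """Compute the h-index from a list of per-publication citation counts."""
    ranked = sorted(citations, reverse=True)  # most-cited publication first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # publication at this rank still has >= rank citations
            h = rank
        else:
            break
    return h

# Papers cited [10, 8, 5, 4, 3] times: four papers have at least
# 4 citations each, but there are no five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```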

Example of an H-graph in Scopus

The H-index has become a popular performance indicator, probably because the calculation of the h-index is easy to understand, and because it takes into account both productivity (the number of publications) and academic impact (the number of citations received). The H-index of an author is also easy to find in Scopus, Web of Science and Google Scholar within a few mouse clicks (but be aware, that's the 'quick and dirty' approach).

However, the H-index has various weak points. An easy example: the H-index doesn't take into account the age of the researcher, which makes it unfair to use the H-index to compare two researchers in different stages of their research careers. A professor at the end of their career has published more than a PhD student and has had more time to get cited. There are also examples of researchers changing disciplines, taking their high H-index with them to a discipline with generally lower H-indexes. Because of differences in publication cultures between disciplines, you can't compare the H-indexes of researchers from different disciplines.

Examples of researchers changing disciplines

From astrophysics to economics: Titus Galama . His Google Scholar Profile shows his H-index based on all his publications.

From psychology to media & communication: Jeroen Jansz . His Google Scholar Profile also shows his H-index based on all his publications.

Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102 (46), 16569-16572. https://doi.org/10.1073/pnas.0507655102

Which H-index?

To create the list of publications and the number of citations received you can use different sources. The most used sources are Scopus, Web of Science and Google Scholar. Which source you use determines the H-index you find for a particular researcher, because the 'publication universes' of the sources are different. For example, Google Scholar also indexes university repositories, so citations in students' theses are counted as well.

The example below shows the general picture: the H-index based on publications in Web of Science is lower than the H-index based on publications in Scopus; the H-index in Google Scholar is by far the highest.

An example: Rutger Engels, Professor of Developmental Psychopathology at ESSB and former rector magnificus of Erasmus University Rotterdam.

* In Web of Science we had to combine and clean several author records. In Scopus the Scopus Author Profile was used, in Google Scholar the Google Scholar Citation Profile was used. 

Our advice: When you are looking for your own H-index or the H-index of another researcher, don’t rely on easily-available numbers. The basis of the H-index is the publications list: you have to check whether the publications in the list you use are indeed of the researcher and whether publications might be missing. Always give information about the source used and use, when possible, multiple sources.

Determining the H-index in Web of Science, Scopus and Google Scholar

In these handouts you can find the steps to determine the H-index of a researcher in:

  • The H-index in Google Scholar
  • The H-index in Scopus
  • The H-index in Web of Science
  • YOU in databases: Academic profiling How can you make sure that your publications can be found easily in citation databases such as Scopus, Web of Science and Google Scholar? By managing your author profiles, such as Web of Science ResearcherID, ORCID iD, Google Scholar Citations profile or Scopus Author ID.
  • EUR ORCID LibGuide Practical information on how to register for an ORCID iD, how to add publication data and other information to your ORCID record and how to use your ORCID iD in other systems.
  • Research Evaluation and Assessment Service (REAS) A team of bibliometric practitioners of the University Library can give advice on how to evaluate academic impact and to create understanding of responsible metrics. Click on the header 'Contact the REAS team'.


  • Last Updated: Mar 27, 2024 6:24 PM
  • URL: https://libguides.eur.nl/informationskillsimpact

Boston College Libraries homepage


Assessing Article and Author Influence

Finding an author's h-index: a brief guide

This page provides an overview of the h-index, an attempt to measure the research impact of a scholar. The topics include:

  • What is the h-index?
  • How is the h-index computed?
  • Factors to bear in mind
  • Using Harzing's Publish or Perish to assess the h-index
  • Using Web of Science to assess the h-index
  • H-index video
  • Contemporary h-index
  • Selected further reading

The h-index, created by Jorge E. Hirsch in 2005, is an attempt to measure the research impact of a scholar. In his 2005 article Hirsch put forward "an easily computable index, h, which gives an estimate of the importance, significance, and broad impact of a scientist's cumulative research contributions." He believed "that this index may provide a useful yardstick with which to compare, in an unbiased way, different individuals competing for the same resource when an important evaluation criterion is scientific achievement." There has been much controversy over the value of the h-index, in particular whether its merits outweigh its weaknesses. There has also been much debate concerning the optimal methodology to use in assessing the index. In locating someone's h-index a number of methodologies/databases may be used. Two major ones are ISI's Web of Science and the free Harzing's Publish or Perish, which uses Google Scholar data.

An h-index of 20 signifies that a scientist has published 20 articles each of which has been cited at least 20 times. Sometimes the h-index is, arguably, misleading. For example, if a scholar's works have received, say, 10,000 citations, he may still have an h-index of only 12, as only 12 of his papers have been cited at least 12 times. This can happen when one of his papers has been cited thousands and thousands of times. So, to have a high h-index one must have published a large number of well-cited papers. There have been instances of Nobel Prize winners in scientific fields who have a relatively low h-index. This is due to them having published one or a very small number of extremely influential papers, and perhaps numerous other papers that were not so important and, consequently, not well cited.

  • As citation practices/patterns can vary quite widely across disciplines, it is not advisable to use h-index scores to assess the research impact of personnel in different disciplines.
  • The h-index is not very widely used in the Arts and Humanities.
  • H-index scores can vary widely depending on the methodology/database used, because different methodologies draw upon different citation data. When comparing different people's h-indexes it's essential to use the same methodology.
  • The h-index does not distinguish the relative contributions of authors in multi-author articles.
  • The h-index may vary significantly depending on how long the scholar has been publishing and on the number of articles they’ve published. Older, more prolific scholars will tend to have a higher h-index than younger, less productive ones.
  • The h-index can never decrease. This, at times, can be a problem as it does not indicate the decreasing productivity and influence of a scholar.

Using Harzing's Publish or Perish to Assess the H-Index

Publish or Perish utilizes data from Google Scholar. Its software may be downloaded from the Publish or Perish website. A person's h-index located through Publish or Perish is often higher than the same person's index located by means of ISI's Web of Science. This is primarily because the Google Scholar data utilized by Publish or Perish includes a much wider range of sources (e.g. working papers, conference papers, technical reports) than does Web of Science. That said, Web of Science may sometimes produce a more authoritative h-index than Publish or Perish, particularly in certain disciplines in the Arts, Humanities and Social Sciences.

After you've launched the application, click on "Author impact" on top. Enter the author's name as initial and surname enclosed with quotation marks, e.g. "S Helluy". Then click "Lookup" (top right). You'll see a screen with a listing of S. Helluy's works arranged by number of citations. Above this listing is a smaller panel where one may see the h-index score of 17:

H-index of 17

Publish or Perish uses Google Scholar data and these data occasionally split a single paper into multiple entries. This is usually due to incorrect or sloppy referencing of a paper by others, which causes Google Scholar to believe that the referenced works are different. However, you can merge duplicate records in the Publish or Perish results list. You do this by dragging one item and dropping it onto another; the resulting item has a small "double document" icon as illustrated below:

merged row indication in interface

  • Marnett, Alan (2010). "H-Index: What It Is and How to Find Yours".
  • Harzing, Anne-Wil (2008). Reflections on the H-Index.
  • Hirsch, J. E. (2005). "An index to quantify an individual's scientific research output". PNAS 102 (46): 16569–16572.
  • Petersen, A. M., Stanley, H. E. and Succi, S. (2011). "Statistical Regularities in the Rank-Citation Profile of Scientists". Scientific Reports 1: 181.
  • Williams, Antony (2011). Calculating my H Index With Free Available Tools.

If you are using Clarivate's Web of Science database to assess an h-index, it is important to remember that Web of Science uses only those citations in the journals listed in Web of Science. However, a scholar's work may be published in journals not covered by Web of Science. It is not possible to add these to the database's citation report or have them count toward the h-index. Also, Web of Science only includes citations to journal articles (no books, chapters, working papers, etc.). Moreover, Web of Science's coverage of journals in the Social Sciences and the Humanities is relatively sparse. This is especially so for the Humanities.

Select the option "Cited Reference Search" (on top). Enter the person's last name and first initial followed by an asterisk, e.g. Helluy S*. If the person always uses a second first name, include the second initial followed by an asterisk, e.g. Franklin KT*.


If other authors have the same name, it’s important that you omit their articles. You can use the check boxes to the left of each article to remove individual items that are not by the author you are searching. The “Refine Results” column on the left can also help by limiting to relevant “Organizations – Enhanced”, by “Research Areas”, by “Publication Years”.

When you've determined that all the articles in the list are by the author you're searching for (S. Helluy), click on "Create Citation Report" on the right. The h-index for S. Helluy will be displayed, as well as other citation stats.

H-index for S. Helluy

Notice the two bar charts that graph the number of items published each year and the number of citations received each year.

bar charts of published items

If you wish to see how the person's h-index has changed over a time period you can use the drop-down menus below to specify a range of years. Web of Science will then re-calculate the h-index using only those articles added for those particular years.

h-index across selected years

Contending that Hirsch's h-index does not take into account the "age" of an article, Sidiropoulos et al. (2006) came up with a modification, the Contemporary H-Index. They argued that though some older scholars may have been "inactive" for a long period, their h-index may still be high, since the h-index cannot decline. The standard index may also be considered somewhat unfair to older, senior scholars who continue to produce (if one has published a lot and already has a high h-index, it is more and more difficult to increase the index). It may also be seen as unfair to young, brilliant scholars who have had time to publish only a small number of significant articles and consequently have only a low h-index. Hirsch's h-index, it is argued, doesn't distinguish between the different productivity/citations of these different kinds of scholars. The solution of Sidiropoulos et al. is to weight articles according to the year in which they were published. For example, "for an article published during the current year, its citations account four times. For an article published 4 years ago, its citations account only one time. For an article published 6 years ago, its citations account 4/6 times, and so on. This way, an old article gradually loses its 'value', even if it still gets citations." Thus, more emphasis is given to recent articles, thereby favoring the h-index of scholars who are actively publishing.
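The weighting just quoted can be sketched in a few lines of Python. The `(publication_year, citation_count)` input shape is an assumption, and the weights follow the quoted example (4 in the publication year, 4/n for an article n years old) rather than the paper's exact parametrisation:

```python
def contemporary_h_index(papers, current_year):
    """Sketch of the contemporary h-index: age-weight citations, then rank.

    `papers` is a list of (publication_year, citation_count) pairs.
    A paper's citations count 4 times in its publication year and
    4/n times when it is n years old.
    """
    scores = sorted(
        (4.0 * cites / max(1, current_year - year) for year, cites in papers),
        reverse=True,
    )
    h = 0
    for rank, score in enumerate(scores, start=1):
        if score >= rank:  # rank-th highest weighted score still covers its rank
            h = rank
    return h

# A fresh paper with 10 citations plus a ten-year-old paper with 3:
# the plain h-index would be 2, but the old paper's weighted score
# drops to 1.2, so the contemporary index is 1.
print(contemporary_h_index([(2024, 10), (2014, 3)], current_year=2024))  # 1
```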

One of the easiest ways to obtain someone's contemporary h-index, or "hc-index", is to use Harzing's Publish or Perish software.

Publish or Perish interface

  • Last Updated: Mar 20, 2024 11:33 AM


How to find your h-index on Google Scholar

  • How to calculate your h-index using Google Scholar
  • The name says it all: get more insights using Harzing's "Publish or Perish"
  • Can you trust the h-index calculated with Google Scholar?
  • Frequently asked questions about finding your h-index on Google Scholar

Google Scholar is a search engine with a special focus on academic papers and patents. It's limited in functionality compared to the major academic databases Scopus and Web of Science , but it is free, and you will easily know your way around because it is like doing a search on Google.

While Scopus and Web of Science limit their analyses to published journal articles, conference proceedings, and books, Google Scholar uses the entire internet as its source of data. As a result, the h-index reported by Google Scholar tends to be higher than the one found in the other databases.

➡️  What is the h-index?

Google Scholar can automatically calculate your h-index; you just need to set up a profile first. By default, Google Scholar profiles are public, allowing others to find you and see your publications and h-index. However, if you don't want to have such a public web presence, you can un-tick the "make my profile public" box on the final page of setting up your profile.

Once you have set up your profile, the h-index will be displayed in the upper right corner. Besides the classic h-index, Google also reports an i10-index along with the h-index. The i10-index is a simple measurement that shows how many of the author's papers have 10 or more citations.
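Since the i10-index is just a count, a one-line sketch suffices (assuming, as above, a plain list of per-paper citation counts):

```python
def i10_index(citations):
    """Count the papers with ten or more citations."""
    return sum(1 for cites in citations if cites >= 10)

# Three of these five papers have at least 10 citations.
print(i10_index([120, 45, 12, 9, 3]))  # 3
```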

Google Scholar h index for Stephen Hawking

Google Scholar also has a special author search , where you can look up the author profiles of others. It will, however, only show results for scholars with public profiles, as well as those of historical scientists like Albert Einstein .

Google Scholar's extensive database might list publications that most academics would not include in an h-index analysis. So it might be useful to vet the papers before calculating the h-index. Scopus and Web of Science offer such functionality to some extent, but for Google Scholar it's not possible to do right in your browser. However, there is a free desktop application called Publish or Perish that allows you to do just that. It's available on Windows, and with some effort you can also run it on macOS and Linux.

In order to check an author's h-index with Publish or Perish go to "Query > New Google Scholar Profile Query". Enter the scholar's name in the search box and click lookup. A window will open with potential matches. After selecting a scholar, the program will query Google Scholar for citation data and populate a list of papers, and present summary statistics on the right of this list. The list is particularly helpful because it can be used to exclude false positives.

Publish or Perish: h-index for Stephen Hawking

In addition to the standard h-index, Publish or Perish also calculates Egghe's g-index, along with normalized and annual individual h-indexes. You can read more about how these are calculated in the Publish or Perish manual.

As illustrated by Stephen Hawking's Google Scholar h-index, and also noted by others, the h-index in Google Scholar tends to be higher than in Scopus or Web of Science. This discrepancy is mainly attributed to the use of different data sources.

While Google Scholar grabs citation information from all over the internet, Scopus and Web of Science restrict their data sources to classic academic sources. Each approach is valid on its own. One could say that Google Scholar's h-index is more up-to-date as it also includes "early citations" from pre-prints before the article is actually published in an academic journal.

Also, with the rise of "altmetrics", there is a general trend toward measuring the resonance of academic papers outside the strict academic world. However, since Google Scholar's approach is fully automatic and not subject to any review, it can also be manipulated rather easily.

For example, you could upload false scholarly papers that give unsupported citation credit, or add papers to the Google Scholar profile that were not even authored by the person in question. Yes, there is room for improvement, but Google Scholar's h-index is a great free alternative to subscription-based databases.

You can learn how to calculate your h-index using Scopus and Web of Science below:

➡️  How to use Scopus to calculate your h-index

➡️  How to use Web of Science to calculate your h-index

An h-index is a rough summary measure of a researcher’s productivity and impact . Productivity is quantified by the number of papers, and impact by the number of citations the researchers' publications have received.

Even though Scopus needs to crunch millions of citations to find the h-index, the look-up is pretty fast. Read our guide How to calculate your h-index using Scopus for further instructions.

Web of Science is a database that has compiled millions of articles and citations. This data can be used to calculate all sorts of bibliographic metrics including an h-index. Read our guide How to use Web of Science to calculate your h-index for further instructions.

The h-index is not something that needs to be calculated on a daily basis, but it's good to know where you stand for several reasons. First, climbing the h-index ladder is something worth celebrating. More importantly, the h-index is one of the measures funding agencies or university hiring committees calculate when you apply for a grant or a position. Given the often huge number of applications, the h-index is calculated in order to rank candidates and apply a pre-filter.

An h-index is calculated as the number of papers with a citation number ≥ h. An h-index of 3 hence means that the author has published at least three articles, of which each has been cited at least three times.


Measuring your research impact: H-Index


Other H-Index Resources

  • An index to quantify an individual's scientific research output This is the original paper by J.E. Hirsch proposing and describing the H-index.

H-Index in Web of Science

The Web of Science uses the H-Index to quantify research output by measuring author productivity and impact.

H-Index = number of papers (h) with a citation number ≥ h.

Example: a scientist with an H-Index of 37 has 37 papers cited at least 37 times.  

Advantages of the H-Index:

  • Allows for direct comparisons within disciplines
  • Measures quantity and impact by a single value.

Disadvantages of the H-Index:

  • Does not give an accurate measure for early-career researchers
  • Calculated by using only articles that are indexed in Web of Science.  If a researcher publishes an article in a journal that is not indexed by Web of Science, the article as well as any citations to it will not be included in the H-Index calculation.

Tools for measuring H-Index:

  • Web of Science
  • Google Scholar

This short clip helps to explain the limitations of the H-Index for early-career scientists.

  • Last Updated: Dec 7, 2022 1:18 PM
  • URL: https://guides.library.cornell.edu/impact


Why I love the H-index


The H-index – a small number with a big impact. First introduced by Jorge E. Hirsch in 2005, it is a relatively simple way to calculate and measure the impact of a scientist (Hirsch, 2005). It divides opinion. You either love it or hate it. I happen to think the H-index is a superb tool to help assess scientific impact. Of course, people are always favourable towards metrics that make them look good. So let's get this out into the open now: my H-index is 44 (I have 44 papers with at least 44 citations) and, yes, I'm proud of it! But my love of the H-index stems from a much deeper obsession with citations.

As an impressionable young graduate student, I saw my PhD supervisor regularly check his citations. A citation to a paper means that someone used your work or thought it was relevant to mention in the context of their own work. If a paper was never cited, and perhaps therefore also little read, was it worth doing the research in the first place? I still remember the excitement of the first citation I ever received, and I still enjoy seeing new citations roll in.

The H-index: what does it mean, how is it calculated and used?

The H-index measures the maximum number of papers N you have, all of which have at least N citations. So if you have 3 papers with at least 3 citations, but you don’t have 4 papers with at least 4 citations then your H-index is 3. Obviously, the H-index can only increase if you keep publishing papers and they are cited.  But the higher your H-index gets, the harder it is to increase it.

One of the ways in which I use the H-index is when making tenure recommendations. By placing the candidate within the context of the H-indices of their departmental peers, I can judge the scientific output of the candidate within the context of the host institution. This is useful because it can be difficult to understand what is required at different host institutions from around the world. It would be negligent to look only at the H-index, and so I use a range of other metrics as well, together with good old-fashioned scientific judgement of their contributions from reading their application and papers.

The m value

One of those extra metrics I use was also introduced by Hirsch, and is called m (Hirsch, 2005). It measures the slope, or rate of increase, of the H-index over time and is, in my view, a greatly underappreciated measure. To calculate the m-value, take the researcher's H-index and divide it by the number of years since their first publication. This measure helps to normalise between those at the early or twilight stages of their career. As Hirsch did for physicists, I broadly categorise people in the field of computational biology according to their m-value in the table below. The boundaries correspond exactly to those used by Hirsch.

So post-docs with an m -value of greater than three are future science superstars and highly likely to have a stratospheric rise. If you can find one, hire them immediately!
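The m-value calculation described above is simple enough to sketch directly; the `max(1, ...)` guard against a zero-year career is my own assumption:

```python
def m_value(h, first_publication_year, current_year):
    """Hirsch's m: h-index divided by years since the first publication."""
    years = max(1, current_year - first_publication_year)  # avoid dividing by zero
    return h / years

# An h-index of 44 earned over a 22-year publishing career gives m = 2.0.
print(m_value(44, 2002, 2024))  # 2.0
```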

The H-trajectory

The graph below shows the growth of the H-index for three scientists  – A, B and C – who respectively have an H-index of 12, 15 and 16.  I call these curves a researcher’s H-trajectory.


If we calculate their m-values, we find that A has a value of 0.5, B has 0.94 and C 1.67. So while each of these researchers has a similar H-index, their likelihood of future growth can be predicted based on past performance. Recently, Daniel Acuna and colleagues presented a sophisticated prediction of future H-index using several features, such as the number of publications and the number in top journals (Acuna et al. 2012).

As any serious citation gazer knows, the H-index has numerous potential problems. For example: researcher A, who spent time in industry, has fewer publications; people with names in non-English alphabets or very common names can be difficult to calculate correctly; and different fields have widely differing authorship, publication and citation patterns. But even considering all these problems, I believe the H-index is here to stay. My experience is that ranking scientists by H-index and m-value correlates very well with my own personal judgements about the impact of scientists that I know, and indeed with the positions that those scientists hold in universities around the world.

Alex Bateman is currently a computational biologist at the Wellcome Trust Sanger Institute, where he has led the Pfam database project. On November 1st, he takes up a new role as Head of Protein Sequence Resources at the EMBL-European Bioinformatics Institute (EMBL-EBI).

J.E. Hirsch (2005). An index to quantify an individual's scientific research output. Proc. Natl. Acad. Sci. 102, 16569–16572.

D.E. Acuna, S. Allesina & K.P. Kording (2012). Predicting scientific success. Nature 489, 201–202.

The problem with the H factor is that it is, to a considerable extent, a measure of how old you are. The m index is supposed to correct this but it can distort things by assuming a linearity that just isn’t there in the development of a scientist.

The alternative I propose is the H5Y factor. It is the H factor, but calculated only on citations received in the past five years. This equalizes the playing field and my guess is that it is a much better predictor of performance for the next five years than H or m. Who cares what you have published thirty years ago? (unless it is still being cited, of course!)
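The proposed H5Y is easy to sketch in code (the function name and data layout are illustrative; each paper is represented by the list of years in which it received a citation):

```python
def h5y(citing_years_per_paper, current_year, window=5):
    """H5Y: the H-index computed only from citations
    received in the last `window` years."""
    recent = sorted(
        (sum(1 for y in years if y > current_year - window)
         for years in citing_years_per_paper),
        reverse=True,
    )
    h = 0
    for rank, count in enumerate(recent, start=1):
        if count >= rank:
            h = rank  # rank-th paper still has >= rank recent citations
        else:
            break
    return h
```

Older papers that are still being cited keep contributing, but a long-dormant back catalogue does not.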

I agree with Constantin, and would add that the m-index is particularly unfair to those who take early career breaks, since it takes several years before the penalty of having a gap after their first few papers starts to become trivially small.

Interestingly, Google Scholar’s “My Citations” pages (e.g. see my profile link for a not-so-random example!) does calculate what Constantin proposes, an H-index computed from the last five years’ citations, though they don’t call it H5Y. Personally, though I agree that this is better than H, I still think it’s a rather biased measure of quality, which more strongly reflects quantity or length of active career.

Funnily enough, I think Google are already using a better measure (which they call H5), but only for their journal rankings, not their author profiles, see e.g. scholar.google.co.uk/citations?view_op=top_venues This measure is the H-index for work published in the last five years, rather than just cited in the last five years, and they call this H5.

I think it would be great if Google Citations profiles showed H5 for authors, but frustratingly, Google’s FAQ indicates that they are opposed to adding new metrics: http://scholar.google.com/intl/en/scholar/citations.html#citations But perhaps Scopus, ResearcherID, Academia.edu, ResearchGate or similar will add H5 in the future…


Most of the H-trajectory plots that I have created for active scientists do show quite a linear trend. I only showed three in the graph above, but researcher A was the only significant deviation that I found. Creating these H-trajectory plots was not as easy as I thought it was going to be: downloading the full citation data is time-consuming given the limits imposed by Scopus and ISI, and I also found that the underlying citation data was not nearly as clean as I expected.

I agree that it is important to be able to take account of career breaks so that we do not penalise researchers unfairly. Being able to plot the H-trajectory might help spot these. But as I mentioned in the article, these metrics should only be used as part of a wider evaluation of individuals' outputs. I tend to agree with the Google view that a proliferation of metrics could lead to more confusion than it solves. But H5-like measures seem like another reasonable way to normalise out the length-of-career issue.

I find Google Scholar far better than ISI. It is updated more regularly and gives better representation to publications in non-English journals. I would choose it over others to calculate any sort of index.

Ok, but why would the h-index given by an online calculator ever be higher than the number of publications?

I agree with Alex. I had the same experience, whether recruiting post-docs or young group leaders, or evaluating tenure (and in one case the head of a large institute). After 3, 10, 20 or 30 years of research, the h-index and m numbers are very good for evaluating not only brilliance at one point, but also steady success. You do not hire the genius who had only one magic paper and nothing else significant. The likelihood that the magic happens again is very low. You have to compare with peers though. Having been an experimental neuroscientist and a computational modeller, I know that the citation patterns are quite different. However, when using the H-index to compare people, we are generally in a situation where we compare similar scientists.

All that of course being a way to quickly sort out A, B or C lists, and uncovering potential problems (100 publications and h-index of 10). After that step, you need to evaluate the candidates more attentively, using interviews etc. But interestingly you very rarely read the publications. In the first screen you have too many of them and in the second you do not need them anymore.

(and I am “excellent” yeah! Not “stellar” though. One delusion I have to get rid of 😉 )

Interesting logic exercises. What about superstars who translate their work into patents/products and cannot publish due to company confidentiality, company goals, etc.? Patents are not cited anywhere close to publications. Organic journals often have low impact factors and low citation rates, as the animal studies in higher-impact journals always overshadow the original synthetic papers. A good friend of mine has an h-index of only 7 but has designed a blockbuster drug (and several other promising leads). I would trade my inflated h-index (product of a hot, speculative field) in a minute to have his stock options, oh, and that drug that helps tens of thousands every day. Sorry to burst that bubble, h-indexers. I used to be a believer but I have now seen the light. Yep, and I would also take that "flash in the pan" invention of PCR (and the Nobel) over 50 years of high citations. One flash can have a greater impact than a thousand scientists over a thousand years.


Thanks for the helpful discussion. I just googled “what is a good h-index” and yours was the first thing to come up. I think as with any single statistic it has limitations, but overall is a decent reflection of output, especially for comparison with similar applicants for a position.

I'd convert your m-index to h-index/(FTE years working since first paper), which would take account of breaks/part-time working. This would particularly help women remain competitive in the context of extended periods of part-time working. So my 10 years since 1st paper would turn into 10-1-(5*0.6) = 6, so my m-index is 2 = 12/6 instead of 12/10. Woo!
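An FTE denominator of this kind can be sketched as follows (the function and its reading of part-time periods, counting each at its FTE fraction, are my own interpretation rather than a fixed convention):

```python
def m_fte(h, years_since_first_paper, break_years=0.0, part_time=()):
    """m-value with a full-time-equivalent-years denominator.

    part_time: iterable of (years, fte_fraction) pairs; each such
    period contributes only years * fte_fraction to the denominator."""
    fte_years = years_since_first_paper - break_years
    for years, fraction in part_time:
        fte_years -= years * (1.0 - fraction)  # keep only the worked fraction
    return h / fte_years
```

With no breaks or part-time periods it reduces to the ordinary m-value.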

Anyway I don’t think anything will stop me checking my citations obsessively and google citations is the easiest place I’ve found to keep my publications organised.

Amen. Coming from someone who was stuck too long in a company in which publishing in the open domain was a big no-no. The h-index is one number, but it is not _the_ number. Neither are Google's variations, and so forth. For example, IQ is another number; it has its uses, but it clearly isn't _the_ number either. Me not like metrics so much.

I like the m-value, but it has the unfortunate effect of penalising the early starter. For example, someone who publishes a paper from their Hons thesis may be penalised by 3-4 years in the denominator producing their m-value when compared with someone starting publishing in the third or fourth year of their PhD. So I would take Kate’s idea further, and use FTE as THE denominator when calculating m, instead of years since first paper. This could include time spent as a PhD student, or not, as long as it was standardised.

Google Scholar gives exactly this statistic under the standard h-value.

One problem with the H or m index can be how many people are actively working in a particular field. For example, there are only ~186 laboratories in the whole world working on my previous field, Candida albicans. But right now I am working in cancer biology. A huge number of people work in that field, hence the h-index will increase dramatically.

I think that is a good suggestion. It is important to take account of career breaks when judging people's scientific output. It's not perfect to just subtract the break length or some combination of time: even during a career break your pre-break papers will still be cited, potentially increasing your H-index. But to a first approximation what you suggest makes good sense. It would be interesting to look at the H-trajectories of people who have taken a career break to see how it affects growth of the H-index.

Yes, that is a good point. Publishing a paper during your degree should be seen as a strong positive indicator, in my opinion, and, as you say, should not penalise the person. OK, so let's use years of FTE employment as the denominator.

It is best to use the H-index only for comparing people within the same field. I'm not sure that moving field is any guarantee of an increasing H-index, but it will be easier for your H-index to grow in the larger field. I guess the smart thing to do is to start in cancer biology and then move to the specialist field 😉

I don't like the h-index when it is used to rank journals, as it is basically a statistic about the best papers in that journal. For example, if Nature has a 5-year h-index of 300, it only says something about those 300 papers and nothing about the thousands of other papers they published. Because of that, PLOS ONE has a very high h-index, I think ranked in the top 30, but that just reflects the number of elite papers being published there, not the tens of thousands of junk papers it publishes.

My problem with the h-index for individuals is that it does not differentiate first-author papers from contributing-author papers. A tech could be put on 50 high-impact papers over 5–6 years because he is in a super-high-impact lab for technical contributions. However, a postdoc in such a lab would have much fewer papers because he would be focusing on making first-author papers. In the end, the tech would have a higher h-index. Is that a fair assessment? Also, that tech could be a postdoc in name but doing tech work. Would such a technician postdoc be at an advantage with employers who only look at the h-index?

The extreme scenarios given to discount the h-index can be absurd: (a) A tech having 50 papers! I do not know a tech that is put on 50 papers in 5 years in any lab. If such a tech exists, then he/she is a superstar tech and needs to be celebrated. (b) Why penalise someone who publishes during their PhD with an m-index? You forget that if someone publishes early, their h-index will increase because their papers start collecting citations early, so even if the denominator is increased by a few years, isn't the numerator also increased? (c) In chemistry, there was a table of the top 500 based on h-index. Everyone on that table was a superstar by other metrics, and all the recent Nobel prize winners were on that list; there was not a single name on it who was not famous. I agree that one cannot use the h-index to differentiate between an h of, say, 15 and 20. But if someone has an h of, say, 60 and another has 30, there is usually night and day between them. The h is here to stay.

None of the above addresses key weaknesses of the h-index – self citation and citation rings.

If you work in large collaborations and projects it is *easy* for *many* people to rack up large numbers of citations (and h-indices) by citing each other's papers and by simply appearing on lots of papers for which they have done little work. At the very least I believe citations should be a conserved quantity: one citation is one citation, and if it is to a paper with 100 authors then it should not add 1 citation to *each* of those authors' records; it should add 0.01 (or some other agreed fraction, dictated by author order, such that Σ(fraction) = 1).

Then, self-citations, both in the form of you citing your own paper and of citations from any paper on which one of your co-authors appears, should not count.

This would cut many h-indices down to size and be a much truer reflection of an individual’s contribution.

What’s your normalised (by number of authors) h-index, excluding self-citations?
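Fractional allocation of the kind proposed above is straightforward to compute. A sketch with equal shares per author (the equal split and the function name are illustrative; a weighting by author order would slot in the same way):

```python
from collections import defaultdict

def fractional_citations(papers):
    """papers: list of (author_list, citation_count) pairs.
    Each citation is split equally among the paper's authors,
    so every citation contributes exactly 1 credit in total."""
    credit = defaultdict(float)
    for authors, cites in papers:
        share = cites / len(authors)
        for author in authors:
            credit[author] += share
    return dict(credit)
```

A 100-author paper with one citation thus adds 0.01 to each author's tally, keeping citations a conserved quantity.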


Our lab publishes over 40 papers a year and has three techs contributing to almost every paper for technical work. They have a higher h than the postdocs.

But what is worse are PIs who do no work and don't even read the paper but are still on the author list… apparently a common occurrence in high-energy physics consortiums.


There should be a metric which weighs author position. Typically the first author does all the work, so first authorship and final authorship should have higher weight compared with other positions. Personally, I think names appearing after the third author and before the final author should not have any weight.



1. Take out self-citations. 2. Take out review article citations. 3. Have a negative field (topic) correction factor. 4. Have a negative correction factor for study section members, journal editors etc. 5. Have a name and country correction factor.

Then let us compete…

I strongly agree about taking out citations for review articles. They totally distort the evaluation of a scientist's worth. Reviews are cited much more highly than original articles and contribute zero to the advancement of science by their authors. Less strongly, I also agree about self-citations, because it is so hard to distinguish between genuine ones and irrelevant self-serving ones.

As both a journal editor and a study section member, I can assure you that neither capacity does anything to citations. I can't think of anyone gratuitously citing my papers so that they can get preferential treatment. This is preposterous.

Finally, topic and country corrections are much more meaningful to apply to the final use of the h-factor than to its calculation. Whether that use is promotion, tenure, a new appointment or funding, you are competing against your compatriot colleagues in the same field. Across countries, most responsible decision makers will apply a correction factor. When looking at post-doc applicants, I would rather take someone from, e.g., India with h=3 than from the US with h=5.

I do not quite agree with a country correction. Nobody cites you because you hail from a particular country. Your work is cited because it's good, relevant and has helped somebody in their research, not just because you are a compatriot.

There is a bit more subtlety to country considerations. I agree that an author’s self-interest would strongly motivate them to cite the best paper that supports the point they are making, regardless of where the paper came from. However, we are judging the author here, not the paper. Across countries, there are huge discrepancies in terms of opportunities authors had to reach any given H value.

Put yourself in the position of a lab director going through post-doc applications. You narrow it down to two applicants, both H=5, one of them achieved it by working in the US, the other by working in, say, Tunisia. All other things being equal, who is more likely to have stellar performance in your lab?

This is what I mean when I say that the country correction should apply to the final use of the H, but not to its calculation.

If it comes down to selecting one among two or three people, the selection will be based entirely on how well the interviews go with that person. Whether he will perform well or not will be gauged through the interview, not by the H-index.


The h-index doesn't always make sense. Although there are many examples of this, a few are given here. There is one scientist with an h-index of 82 and 37,900 citations even at the beginning of her career, yet she hasn't written a single research paper in her life. Because she has membership of many CERN groups around the world, her name appears among the thousands of names given at the end of many research papers; she is not even a co-author of any of those papers. Just because CERN members publish papers in so-called US or European journals, the Google Scholar search robot finds all those papers, and most CERN members get a higher h-index and more citations that way. On the other hand, there are researchers who have written 70 to 100 papers but have an h-index below 10 and fewer citations, simply because the Google search robot can't find many good journals. The robot easily finds US journals, because it treats US journals as reputed journals. When I was doing my Ph.D. in the early nineties, I read several research papers. I found one good paper with original data on ferrite thin films published by some Japanese scientists in a Japanese journal. A few years after that, I found that some US researchers had deposited the same material on the same substrate using the same techniques, but their data were worse than the data published by the Japanese researchers. Yet the US researchers published their worse data, a year later, in the US Journal of Applied Physics. So how can someone say that US journals are the best?



Do researchers know what the h-index is? And how do they estimate its importance?

  • Open access
  • Published: 26 April 2021
  • Volume 126, pages 5489–5508 (2021)


  • Pantea Kamrani, ORCID: orcid.org/0000-0002-8880-8105
  • Isabelle Dorsch, ORCID: orcid.org/0000-0001-7391-5189
  • Wolfgang G. Stock, ORCID: orcid.org/0000-0003-2697-3225


The h-index is a widely used scientometric indicator on the researcher level, working with a simple combination of publication and citation counts. In this article, we pursue two goals, namely the collection of empirical data about researchers' personal estimations of the importance of the h-index for themselves as well as for their academic disciplines, and on the researchers' concrete knowledge of the h-index and the way it is calculated. We worked with an online survey (including a knowledge test on the calculation of the h-index), which was finished by 1081 German university professors. We distinguished between the results for all participants and, additionally, the results by gender, generation, and field of knowledge. We found a clear binary division between the academic knowledge fields: for the sciences and medicine, the h-index is important for the researchers themselves and for their disciplines, while for the humanities and social sciences, economics, and law, the h-index is considerably less important. Two fifths of the professors do not know details of the h-index, or wrongly believe they know what it is and failed our test. The researchers' knowledge of the h-index is much smaller in the academic branches of the humanities and the social sciences. As the h-index is important for many researchers, and as not all researchers are very knowledgeable about this author-specific indicator, it seems necessary to make researchers more aware of scholarly metrics literacy.


Introduction

In 2005, Hirsch introduced his famous h-index. It combines two important measures of scientometrics, namely the publication count of a researcher (as an indicator of his or her research productivity) and the citation count of those publications (as an indicator of his or her research impact). Hirsch (2005, p. 16,569) defines: "A scientist has index h if h of his or her N_p papers have at least h citations each and the other (N_p − h) papers have ≤ h citations each." If a researcher has written 100 articles, for instance, 20 of these having been cited at least 20 times and the other 80 less than that, then the researcher's h-index will be 20 (Stock and Stock 2013, p. 382). Following Hirsch, the h-index "gives an estimate of the importance, significance, and broad impact of a scientist's cumulative research contribution" (Hirsch 2005, p. 16,572). Hirsch (2007) assumed that his h-index may predict researchers' future achievements. Looking at this in retrospect, Hirsch had hoped to create an "objective measure of scientific achievement" (Hirsch 2020, p. 4) but has also started to believe that it could be the opposite. Indeed, it became a measure of scientific achievement, albeit a very questionable one.
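Hirsch's definition translates directly into code (a minimal sketch; the function name is mine):

```python
def h_index(citations):
    """Return h: the largest h such that h papers have >= h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h
```

Applied to the 100-article example above (20 papers with at least 20 citations each, 80 with fewer), it returns 20.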

Also in 2005, Hirsch derived the m-index with the researcher's "research age" in mind. Let the number of years after a researcher's first publication be t_p. The m-index is the quotient of the researcher's h-index and his or her research age: m_p = h_p / t_p (Hirsch 2005, p. 16,571). An m-value of 2 would mean, for example, that a researcher has reached an h-value of 20 after 10 research years. Meanwhile, the h-index is strongly wired into our scientific system. It has become one of the "standard indicators" in scientific information services and can be found in many general scientific bibliographic databases. Besides, it is used in various contexts and has generated a lot of research and discussion. The indicator is used, or rather misused, depending on one's point of view, in decisions about researchers' career paths, e.g. as part of academics' evaluation concerning awards, funding allocations, promotion, and tenure (Ding et al. 2020; Dinis-Oliveira 2019; Haustein and Larivière 2015; Kelly and Jennions 2006). For Jappe (2020, p. 13), one of the arguments for the use of the h-index in evaluation studies is its "robustness with regards to incomplete publication and citation data." On the contrary, the index is well known for its inconsistencies, its unsuitability for comparisons between researchers at different career stages, and its missing field normalization (Costas and Bordons 2007; Waltman and van Eck 2012). Various lists of advantages and disadvantages of the h-index already exist (e.g. Rousseau et al. 2018). And it is still questionable what underlying concept the h-index represents, due to its conflation of the two concepts of productivity and impact into one single number (Sugimoto and Larivière 2018).

It is easy to identify lots of variants of the h-index, concerning both the basis of the data and the concrete formula of calculation. Working with the numbers of publications and their citations, there are data based upon the leading general bibliographic information services Web of Science (WoS), Scopus, and Google Scholar, and, additionally, on ResearchGate (da Silva and Dobranszki 2018); working with publication numbers and the number of the publications' reads, there are data based upon Mendeley (Askeridis 2018). Depending on an author's visibility on an information service (Dorsch 2017), we see different values of the h-index for WoS, Scopus, and Google Scholar (Bar-Ilan 2008), mostly following the inequality h(R)_WoS < h(R)_Scopus < h(R)_Google Scholar for a given researcher R (Dorsch et al. 2018). Having in mind that WoS consists of many databases (Science Citation Index Expanded, Social Science Citation Index, Arts & Humanities Citation Index, Emerging Sources Citation Index, Book Citation Index, Conference Proceedings Citation Index, etc.) and that libraries do not always provide access to all of them (or to all years), it is no surprise that we find different h-indices on WoS depending on the subscribed sources and years (Hu et al. 2020).

After Hirsch's publication of the two initial formulas (i.e. the h-index and the time-adjusted m-index), many scientists felt compelled to produce similar, but only slightly mathematically modified, formulas that did not lead to brand-new scientific insights (Alonso et al. 2009; Bornmann et al. 2008; Jan and Ahmad 2020), as there are high correlations between the values of the variants (Bornmann et al. 2011).

How do researchers estimate the importance of the h-index? Do they really know its concrete definition and formula? In a survey for Springer Nature (N = 2734 authors of Springer Nature and BioMed Central), Penny (2016, slide 22) found that 67% of the scientists asked use the h-index and a further 22% are aware of it but have not used it; however, 10% of respondents do not know what the h-index is. Rousseau and Rousseau (2017) asked members of the International Association of Agricultural Economists and gathered 138 answers. Here, more than two fifths of all questionees did not know what the h-index is (Rousseau and Rousseau 2017, p. 481). Among Taiwanese researchers (n = 417), 28.78% self-reported having heard about the h-index and fully understanding the indicator, whereas 22.06% had never heard of it; the rest stated that they had heard of it but did not know its content, or knew only some aspects (Chen and Lin 2018). For academics in Ireland (n = 19), "journal impact factor, h-index, and RG scores" are familiar concepts, but "the majority cannot tell how these metrics are calculated or what they represent" (Ma and Ladisch 2019, p. 214). Likewise, the interviewed academics (n = 9) could name "more intricate metrics like h-index or Journal Impact Factor, [but] were barely able to explain correctly how these indicators are calculated" (Lemke et al. 2019, p. 11). The knowledge about scientometric indicators in general "is quite heterogeneous among researchers," Rousseau and Rousseau (2017, p. 482) state. This is confirmed by further studies on the familiarity, perception, and usage of research evaluation metrics in general (Aksnes and Rip 2009; Derrick and Gillespie 2013; Haddow and Hammarfelt 2019; Hammarfelt and Haddow 2018).

In a blog post, Tetzner (2019) speculates on concrete numbers for a "good" h-index for academic positions. Accordingly, an h-index between 3 and 5 is good for a new assistant professor, an index between 8 and 12 for a tenured associate professor, and, finally, an index of more than 15 for a full professor. However, these numbers are gross generalizations without a sound empirical foundation. As our data are from Germany, the question arises: what kinds of tools do German funders, universities, etc. use for research evaluation? Unfortunately, there are only a few publications on this topic. For scientists at German universities, bibliometric indicators (including the h-index and the impact factor) are important or very important for scientific reputation for more than 55% of the questionees (Neufeld and Johann 2016, p. 136). Those indicators also have relevance, or even great relevance, for hiring for academic positions in the estimation of more than 40% of the respondents (Neufeld and Johann 2016, p. 129). In a ranking of aspects of the reputation of medical scientists, the h-index takes rank 7 (with a mean value of 3.4, 5 being the best) out of 17 evaluation criteria. Top-ranked indicators are the reputation of the journals in which the scientists publish (4.1), the scientists' citations (4.0), and their publication count (3.7) (Krempkow et al. 2011, p. 37). For the hiring of psychology professors in Germany, the h-index had factual relevance for the tenure decision, with a mean value of 3.64 (on a six-point scale), and ranks at position 12 out of more than 40 criteria for a professorship (Abele-Brehm and Bühner 2016). Here, the number of peer-reviewed publications is top-ranked (mean value of 5.11). Obviously, these few studies highlight that the h-index indeed has relevance for research evaluation in Germany, next to publication and citation numbers.

What remains a research desideratum is an in-depth description of researchers’ personal estimations of the h-index and an analysis of possible differences by researchers’ generation, gender, and discipline.

What about the researchers’ state of knowledge of the h-index? Of course, we may ask, “What is your knowledge of the h-index? Estimate it on a scale from 1 to 5!” But personal estimations are subjective and do not substitute for a test of knowledge (Kruger and Dunning 1999 ). Tests of researchers’ knowledge of the h-index are, to the best of our knowledge, a research desideratum, too.

In this article, we pursue two goals: on the one hand, similar to Buela-Casal and Zych ( 2012 ) on the impact factor, the collection of data about researchers’ personal estimations of the importance of the h-index for themselves as well as for their discipline; on the other hand, the collection of data on researchers’ concrete knowledge of the h-index and the way it is calculated. In short, these are our research questions:

RQ1: How do researchers estimate the importance of the h-index?

RQ2: What is the researchers’ knowledge on the h-index?

In order to answer RQ1, we asked researchers for their personal opinions; to answer RQ2, we additionally tested their knowledge.

Online survey

Online-survey-based questionnaires provide a means of generating quantitative data. Furthermore, they ensure anonymity, and thus a high willingness to disclose personal information, preferences, and one’s own knowledge. We therefore decided to work with an online survey. As we live and work in Germany, we know the German academic landscape well and thus restricted ourselves to professors working at a German university. We focused on university professors as the sample population (excluding other academic staff at universities as well as professors at universities of applied sciences) because we wanted to concentrate on persons who (1) have an established career path (in contrast to other academic staff) and (2) are to a high extent oriented towards publishing their research results (in contrast to professors at universities of applied sciences, formerly called Fachhochschulen , i.e. polytechnics, who are primarily oriented towards practice).

The online questionnaire (see Appendix 1 ), in German, contained three sections. In Sect.  1 , we asked for personal data (gender, age, academic discipline, and university). Section  2 covers the professors’ personal estimations of the importance of publications, citations, their visibility on WoS, Scopus, and Google Scholar, the h-index on the three platforms, the importance of the h-index in their academic discipline, and, finally, their preferences concerning the h-index or the m-index. We chose those three information services as they are the most prominent general scientific bibliographic information services (Linde and Stock 2011 , p. 237) and all three present their specific h-index in a clearly visible way. Section  3 includes the knowledge test on the h-index and a question concerning the m-index.

In this article, we report on all aspects relating to the h-index (for other aspects, see Kamrani et al. 2020 ). For the estimations, we used a 5-point Likert scale (from 1: very important via 3: neutral to 5: very unimportant) (Likert 1932 ). For all estimations, it was also possible to click “prefer not to say.” The test in Sect.  3 was composed of two questions, namely a subjective estimation of the participant’s own knowledge of the h-index and an objective multiple-choice test of that knowledge (items: one correct answer, four incorrect ones as distractors, and the option “I’m not sure”). These were the five items (the third one counted as correct):

h is the quotient of the number of citations of journal articles in a reference period and the number of published journal articles in the same period;

h is the quotient of the general number of citations of articles (in a period of three years) and the number of citations of a researcher’s articles (in the same three years);

h is the number of articles by a researcher, which were cited h times at minimum;

h is the number of all citations concerning the h-index, minus h²;

h is the quotient of the number of citations of a research publication and the age of this publication.
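The third item above is the correct definition: h is the largest number such that h of a researcher’s papers have each been cited at least h times. Under that definition, the index can be computed with a short sketch (the citation counts in the example are invented for illustration):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # this paper still supports a larger h
            h = rank
        else:
            break
    return h

# Five papers with 10, 8, 5, 4, and 3 citations yield h = 4:
# four papers have at least 4 citations each, but not five papers with 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```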

A selected-response format was chosen for the objective knowledge test since it is recommended as the best choice for measuring knowledge (Haladyna and Rodriguez 2013 ). For the development of the knowledge test items, we predominantly followed the 22 recommendations given by Haladyna and Rodriguez ( 2013 , in section II). A three-option multiple-choice format is held to be superior to four- or five-option formats for several reasons; however, we decided to use five options because our test contained only one question. The “I’m not sure” option was added because our test is not a typical (classroom) assessment test. We therefore did not want to force an answer, for example through guessing, but rather wanted to know whether participants did not know the correct answer. Creating reliable distractors can be seen as the most difficult part of test development. Furthermore, validation is a crucial task. Here we tested and validated the question to the best of our knowledge.

As no ethical review board was involved in our research, we had to determine the ethical harmlessness of the research project ourselves and followed suggestions for ethical research applying online surveys, such as consent, risk, privacy, anonymity, confidentiality, and autonomy (Buchanan and Hvizdak 2009 ). We found the e-mail addresses of the participants in a publicly accessible source (a handbook of all German faculty members, Deutscher Hochschulverband 2020 ); participation was voluntary, and the participants knew that their answers were stored. At no time were participants individually identifiable through our data collection or preparation, as we strictly anonymized all questionnaires.

Participants

The addresses of the university professors were randomly extracted from the German Hochschullehrer-Verzeichnis (Deutscher Hochschulverband 2020 ). Our procedure was thus non-probability sampling, more precisely convenience sampling in combination with volunteer sampling (Vehovar et al. 2016 ). Starting with volume 1 of the 2020 edition of the handbook, we randomly picked entries and recorded the e-mail addresses. The link to the questionnaire was distributed to every single professor via the found e-mail addresses; to host the survey, we used UmfrageOnline . To strengthen the power of the statistical analysis, we predefined a minimum of 1000 usable questionnaires. The power tables provided by Cohen ( 1988 ) have a maximum of n  = 1000 participants. We therefore chose this sample size to ensure statistically significant results, also for smaller subsets such as single genders, generations, and disciplines (Cohen 1992 ). We started the mailing in June 2019 and stopped it in March 2020, once we had received more than 1000 valid questionnaires. In all, we contacted 5722 professors by mail and arrived at 1081 completed questionnaires, which corresponds to a response rate of 18.9%.

Table 1 shows a comparison between our sample of German professors at universities and the population as found in the official statistics (Destatis 2019 ). There are only minor differences concerning the gender distribution and few divergences concerning most disciplines; however, Table 1 exhibits two large differences: our sample contains more (natural) scientists than the official statistics and fewer scholars in the humanities and the social sciences.

In our analysis, we always distinguished between the results for all participants and, additionally, the results by gender (Geraci et al. 2015 ), generation (Fietkiewicz et al. 2016 ), and field of knowledge (Hirsch and Buela-Casal 2014 ). We differentiated two genders (men, women; note that the questionnaire also provided the options “diverse” and “prefer not to say,” which were excluded from further calculations concerning gender), four generations (Generation Y: born after 1980; Generation X: born between 1960 and 1980; Baby Boomers: born after 1946 and before 1960; Silent Generation: born before 1946), and six academic disciplines: (1) geosciences, environmental sciences, agriculture, forestry; (2) humanities, social sciences; (3) sciences (including mathematics); (4) medicine; (5) law; and (6) economics. This division of knowledge fields is in line with the faculty structure of many German universities. As some participants answered some questions with “prefer not to say” (which was excluded from further calculations), the sum of all answers is not always 1081.

As our Likert scale is an ordinal scale, we calculated in each case the median as well as the interquartile range (IQR). For the analysis of significant differences, we applied the Mann–Whitney u-test (Mann and Whitney 1947 ) for the two values of gender and the Kruskal–Wallis h-test (Kruskal and Wallis 1952 ) for more than two values, as with the generations and academic disciplines. The data on the researchers’ knowledge of the h-index are on a nominal scale, so we calculated relative frequencies for three values (1: the researcher knows the h-index in her/his self-estimation and passed the test; 2: the researcher does not know the h-index in her/his self-estimation; 3: the researcher knows the h-index in her/his self-estimation and failed the test) and used the chi-squared test (Pearson 1900 ) for the analysis of differences between gender, knowledge area, and generation. We distinguish between three levels of statistical significance, namely *: p  ≤ 0.05 (significant), **: p  ≤ 0.01 (very significant), and ***: p  ≤ 0.001 (extremely significant); however, such values always have to be interpreted with caution (Amrhein et al. 2019 ). All calculations were done with SPSS (see a sketch of the data analysis plan in Appendix 2 ).
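The descriptive part of this analysis plan, the median and IQR of ordinal Likert ratings, can be illustrated with a minimal Python sketch using only the standard library (the ratings below are invented, not taken from the survey data):

```python
import statistics

def median_iqr(ratings):
    """Median and interquartile range (Q3 - Q1) of ordinal Likert ratings."""
    # "inclusive" treats the data as the whole population of answers
    q1, _, q3 = statistics.quantiles(ratings, n=4, method="inclusive")
    return statistics.median(ratings), q3 - q1

# Invented ratings on the 5-point scale (1: very important ... 5: very unimportant)
print(median_iqr([1, 2, 2, 3, 5]))  # (2, 1.0)
```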

Researchers’ estimations of the h-index

How do researchers estimate the importance of the h-index for their academic discipline? And how important is the h-index (on WoS, Scopus, and Google Scholar) to the researchers themselves? In this section, we answer research question 1.

Table 2 shows the researchers’ estimations of the importance of the h-index for their discipline. While for all participants the h-index is “important” (2) for their academic field (median 2, IQR 1), there are massive and extremely significant differences between the single disciplines. For the sciences, medicine, and the geosciences (including environmental sciences, agriculture, and forestry), the h-index is much more important (median 2, IQR 1) than for economics (median 3, IQR 1), the humanities and social sciences (median 4, IQR 2), and law (median 5, IQR 0). The most votes for “very important” come from medicine (29.1%), the fewest from the humanities and social sciences (1.0%) as well as from law (0.0%). Conversely, the most very negative estimations (5: “very unimportant”) can be found among lawyers (78.6%) and scholars from the humanities and social sciences (30.4%). There is a clear divide between the sciences (including the geosciences, etc., and medicine) on the one hand and the humanities and all social sciences (including law and economics) on the other, with a strong importance of the h-index for the former disciplines and a weak importance for the latter.

In Tables 3 , 4 and 5 we find the results for the researchers’ estimations of the importance of their h-index on WoS (Table 3 ), Scopus (Table 4 ), and Google Scholar (Table 5 ). For all participants, the h-index on WoS is the most important one (median 2; however, with a wide dispersion of IQR 3), leaving Scopus and Google Scholar behind it (median 3, IQR 2 for both services). For all three bibliographic information services, the estimations of men and women do not differ statistically. For scientists (including geoscientists, etc.), a high h-index on WoS and Scopus is important (median 2); interestingly, economists join the scientists when it comes to the importance of the h-index on Google Scholar (all three disciplines having a median of 2). For scholars from the humanities and social sciences, the h-indices on all three services are unimportant (median 4); for lawyers, they are even very unimportant (median 5). For researchers in the area of medicine there is a decisive ranking: most important is their h-index on WoS (median 2, IQR 2, and 41.5% votes for “very important”), followed by Scopus (median 2, IQR 1, but only 18.4% votes for “very important”), and, finally, Google Scholar (median 3, IQR 1, with the mode also equal to 3, “neutral”). For economists, the highest share of (1)-votes (“very important”) is found for Google Scholar (29.9%), in contrast to the fee-based services WoS (19.7%) and Scopus (12.2%).

Similar to the results for the knowledge areas, there is also a clear result concerning the generations: the older a researcher, the less important his or her own h-index is to him or her. We see a declining number of (1)-votes for all three information services, and a median moving over the generations from 2 to 3 (WoS), 2 to 4 (Scopus), and 2 to 3 (Google Scholar). The youngest generation prefers the h-index on Google Scholar ((1)-votes: 34.9%) over the h-indices on WoS ((1)-votes: 25.9%) and Scopus ((1)-votes: 19.8%).

A very interesting result of our study is the impressive difference in importance estimations of the h-index by discipline (Fig.  1 ). With three tiny exceptions, the estimations of the general importance and of the importance of the h-indices on WoS, Scopus, and Google Scholar are consistent within each scientific discipline. For the natural sciences, the geosciences, etc., and medicine, the h-index is important (median 2); for economics, it is neutral (median 3); for the humanities and social sciences, it is unimportant (median 4); and, finally, for law this index is even very unimportant (median 5).

figure 1

Researchers’ estimations of the h-index by discipline (medians). N  = 1001 (general importance), N  = 961 (WoS), N  = 946 (Scopus), N  = 966 (Google Scholar); Scale: (1) very important, (2) important, (3) neutral, (4) unimportant, (5) very unimportant

We do not want to withhold a side result on the estimation of a modification of the h-index, the time-adjusted m-index. 567 participants made a decision: for 50.8% of them, the h-index is the better one; 49.2% prefer the m-index. More women (61.1%) than men (47.3%) choose the m-index over the original h-index. All academic disciplines except one prefer the m-index; the scientists are the exception (only 42.8% approval for the m-index). For members of Generation Y, the Baby Boomers, and the Silent Generation, the m-index is the preferable index; Generation X mainly (54.3%) prefers the h-index. Within the youngest generation, Generation Y (which is disadvantaged by the h-index), the majority of researchers (65.5%) likes the m-index more than the h-index.
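The m-index discussed here is, in Hirsch’s original proposal, the h-index divided by the researcher’s academic age, i.e. the number of years since the first publication, which is why it disadvantages young researchers less than the raw h-index. A minimal sketch (the input values are hypothetical):

```python
def m_index(h, first_pub_year, current_year):
    """Time-adjusted m-index: h-index divided by academic age in years."""
    academic_age = max(current_year - first_pub_year, 1)  # avoid division by zero
    return h / academic_age

# A hypothetical researcher with h = 20 over a 15-year career:
print(round(m_index(20, 2005, 2020), 2))  # 1.33
```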

Researchers’ state of knowledge on the h-index

Answering our research question 2, the overall result is presented in Fig.  2 . It is a combination of three questions, as we initially asked the researchers for their personal estimations of their general familiarity with (Appendix 1 , Q10) and calculation knowledge of (Q13) the h-index. Only participants who confirmed that they have knowledge of the indicator’s calculation (Q10 and Q13) took the knowledge test (Q14). About three-fifths of the professors know the h-index according to their self-estimations and passed the test; one-third of all answering participants do not know the h-index according to their self-estimations; and, finally, 7.2% wrongly estimated their knowledge of the h-index, as they failed the test but believed they knew it.

figure 2

Researchers’ state of knowledge on the h-index: The basic distribution. N  = 1017

In contrast to many of our results concerning the researchers’ estimations of the importance of the h-index, we see differences in the knowledge of the h-index by gender (Table 6 ). Only 41.6% of the women have verified knowledge (men: 64.6%), 50.0% do not know the definition or the formula of the h-index (men: 28.7%), and 8.3% wrongly estimate their knowledge as sufficient (men: 6.9%). However, these differences are not statistically significant.

In the sciences (incl. the geosciences, etc.) and in medicine, more than 70% of the participants know how to calculate the h-index. Scientists have the highest level of knowledge of the h-index (79.1% passed the knowledge test). Participants from the humanities and social sciences (21.1%) as well as from law (7.1%) exhibit the lowest states of knowledge concerning the h-index. With a share of 48.3%, economists take a middle position between the two main groups of researchers; however, 13.8% of economists wrongly overestimate their state of knowledge.

We found a clear result concerning the generations: the older the researcher, the less knowledge of the h-index. While 62.9% of Generation X know how the h-index is calculated, only 53.2% of the Baby Boomers possess this knowledge. The differences in the states of the researchers’ knowledge of the h-index across the knowledge areas and the generations are each extremely significant.

Main results

Our main results concern the researchers’ estimations of the h-index and their state of knowledge of this scientometric indicator. We found a clear binary division between the academic knowledge fields: for the sciences (including the geosciences, agriculture, etc.) and medicine, the h-index is important for the researchers themselves and for their disciplines, while for the humanities and social sciences, economics, and law the h-index is considerably less important. For the respondents from the sciences and medicine, the h-index on WoS is most important, followed by the h-indices of Google Scholar and Scopus. Surprisingly, for economists Google Scholar’s h-index is very attractive. We did not find significant differences between men and women in their estimations of the importance of the h-index; however, there are differences concerning the generations: the older the participants, the less important they judge the h-index to be.

Probably, for older professors the h-index does not have the same significance as for their younger colleagues, as they are less in need of planning their further career or applying for new research projects. On average, the productivity of researchers aged 60 and over declines relative to younger colleagues (Kyvik 1990 ). And perhaps some of them simply do not know of the existence of more recent services and new scientometric indicators. Younger researchers are more tolerant of novelty in their work (Packalen and Bhattacharya 2015 ), and such novelty includes new information services (such as Scopus and Google Scholar) as well as new indicators (such as the h-index). It is known that young researchers rely heavily on search engines like Google (Rowlands et al. 2008 ), which may partly explain the high values for Google Scholar, especially from Generation Y. Furthermore, the increasing publication pressure and the use of the h-index in decisions about early career researchers’ work-related paths also raise the importance of the indicator for those young professors (Farlin and Majewski 2013 ).

All in all, two-fifths of the professors do not know the concrete calculation of the h-index or, which is rather alarming, wrongly believe they know what the h-index is, having failed our simple knowledge test. The women do even worse, as only about two-fifths really know what the h-index is and how it is defined and calculated; but we should keep in mind that this gender difference is not statistically significant. The older the researcher, the higher the share of participants who do not know the definition and calculation of the h-index. The researchers’ knowledge of the h-index is much smaller in the humanities and the social sciences.

The h-index in the academic areas

Especially the obvious differences between the academic areas demand further explanation. Participants from the natural sciences and from medicine estimate the importance of the h-index as “important” or even “very important,” and they know details of this indicator to a high extent. The participants from the humanities, the social sciences, economics, and law are quite different: they estimate the h-index’s importance as “neutral,” “unimportant,” or even “very unimportant,” and the share of researchers with profound knowledge of the h-index is quite low. Haddow and Hammarfelt ( 2019 ) also report a lower use of the h-index within these fields; similar to our study, researchers in the field of law ( n  = 24) in particular did not make use of the h-index. All researchers publish, and all cite, too. There are differences in their publication channels, as scientists publish mostly in journals while researchers from the humanities publish in monographs and sometimes also in journals (Kulczycki et al. 2018 ), but this may not explain the differences concerning the importance of, and the state of knowledge of, the h-index. Furthermore, more information on how such disciplinary h-index perceptions relate to the (mis)use of the h-index for research evaluation within those disciplines would add another dimension to this topic.

The admittedly very large general information services WoS and Scopus are quite incomplete compared to researchers’ personal literature lists (Hilbert et al. 2015 ). There is also a pronounced unequal coverage of certain disciplines (Mongeon and Paul-Hus 2016 ) and of many languages other than English (Vera-Baceta et al. 2019 ). Perhaps these facts in particular prevent representatives of the disadvantaged disciplines and languages (including German, and we asked German professors) from rating the relevance of their h-index on these platforms as important. The likewise observable rejection of the h-index of Google Scholar is then surprising, however, because this information service is by far the most complete (Martin-Martin et al. 2018 ). Economists are very well informed here, as they, as the only academic representatives, highly value their h-index on Google Scholar. On the other hand, the use of Google Scholar for research evaluation is debated in general. Although its coverage is usually broader than that of more controlled databases and its collection is steadily expanding, there are widely known issues, for example its low accuracy (Halevi et al. 2017 ). Depending on a researcher’s own opinion on this topic, this could also be a reason for seeing no importance in the h-index provided by Google Scholar.

Another possible explanation lies in the different cultures of the different research areas. For Kagan ( 2009 , p. 4), natural scientists see their main interest in explanation and prediction, while for humanists it is understanding (following Snow 1959 and Dilthey 1895 , p. 10). The h-index is called an indicator allowing explanation and prediction of scientific achievement (Hirsch 2007 ); it is typical of the culture of the natural sciences. Researchers from the natural sciences and from medicine are accustomed to numbers, while humanists seldom work quantitatively. In the humanities, other indicators, such as book reviews and the quality of book publishers, are components of research evaluation; however, such aspects are not reflected by the h-index. And if humanities scholars are never asked for their h-index, why should they know or use it?

Following Kagan ( 2009 , p. 5) a second time, humanists exhibit only minimal dependence on outside support, whereas natural scientists are highly dependent on external sources of financing. The h-index can work as an argument in the allocation of outside support. So for natural scientists the h-index is part of the everyday fabric of their work and they need it for their academic survival; humanists are not as familiar with numerical indicators, and for them the h-index is not as necessary as it is for their colleagues from the science and medicine faculties. However, this dichotomous classification of research and researchers may be an oversimplification (Kowalski and Mrdjenovich 2016 ), and there is a trend towards consulting and using such research evaluation indicators in the humanities and social sciences, too. For preparing a satisfying theory of researchers’ behavior concerning the h-index (or scientometric indicators in general), also in dependence on their background in an academic field, more research is needed.

Limitations, outlook, and recommendations

A clear limitation of the study is the studied population, namely university professors from Germany. Of course, researchers in other countries should be included in further studies. It also seems necessary to broaden the view towards all researchers and all occupational areas, including, for instance, lecturers at polytechnics and researchers in private companies. Another limitation is the consideration of only three h-indices (from WoS, Scopus, and Google Scholar). As there are other databases for the calculation of an h-index (e.g., ResearchGate), the study should be broadened to all variants of the h-index.

Another interesting research question may be: Are there any correlations between the estimations of the importance of the h-index, or the researcher’s knowledge of the h-index, and the researcher’s own h-index? Does a researcher with a high h-index on, for instance, WoS estimate the importance of this indicator higher than a researcher with a low h-index? Hirsch ( 2020 ) speculates that people with high h-indices are more likely to think that this indicator is important. A more in-depth analysis of the self-estimation of researchers’ h-index knowledge might also consider the Dunning–Kruger effect, whereby certain people are wrongly confident about their limited knowledge within a domain and lack the ability to realize this (Kruger and Dunning 1999 ).

As the h-index still has an important impact on the evaluation of scientists, and as not all researchers are very knowledgeable about this author-specific research indicator, it seems to be a good idea to strengthen their knowledge in the broader area of “metric-wiseness” (Rousseau et al. 2018 ; Rousseau and Rousseau 2015 ). Calling for a stronger focus on educating researchers and research support staff in the application and interpretation of metrics, and on reducing the misuse of indicators, Haustein ( 2018 ) speaks of better (scholarly) “metrics literacies.” Following Hammarfelt and Haddow ( 2018 ), we should further discuss the possible effects of indicators within the “metrics culture.” Likewise, this also applies to all knowledgeable researchers as well as to research evaluators, who may or may not be researchers themselves. Here, the focus lies rather on raising awareness of metrics literacies and on fostering fair research evaluation practices free of any kind of misuse. This leads directly to a research gap in scientometrics. Further research on concrete data about the level of researchers’ knowledge not only of the h-index but also of other indicators, such as WoS’s impact factor, Google’s i10-index, Scopus’ CiteScore, the source normalized impact per paper (SNIP), etc., also in a comparative perspective, would draw a more comprehensive picture of current indicator knowledge. All the meanwhile “classical” scientometric indicators are based upon publication and citation measures (Stock 2001 ). Alternative indicators based upon social media metrics, called “altmetrics,” are available today (Meschede and Siebenlist 2018 ; Thelwall et al. 2013 ). How do researchers estimate the importance of these alternative indicators, and do they know their definitions and calculation formulae? First insights are given by Lemke et al. ( 2019 ), also with regard to researchers’ personal preferences and concerns.

Following Hirsch ( 2020 ), the h-index is by no means a valid indicator of research quality; however, it is very common, especially in the sciences and medicine. Probably, it is a convenient indicator for some researchers who want to avoid the laborious and time-consuming scrutiny of other researchers’ œuvres. Apart from its convenience and popularity, and seen from an ethical perspective, one should consider what significance a single metric should have and how we, in general, want to further shape the future of research evaluation.

Abele-Brehm, A., & Bühner, M. (2016). Wer soll die Professur bekommen? Eine Untersuchung zur Bewertung von Auswahlkriterien in Berufungsverfahren der Psychologie. Psychologische Rundschau, 67 , 250–261. https://doi.org/10.1026/0033-3042/a000335 .


Aksnes, D. W., & Rip, A. (2009). Researchers’ perceptions of citations. Research Policy, 38 (6), 895–905. https://doi.org/10.1016/j.respol.2009.02.001 .

Amrhein, V., Greenland, S., & McShane, B. (2019). Retire statistical significance. Nature, 567 (7748), 305–307. https://doi.org/10.1038/d41586-019-00857-9 .

Askeridis, J. (2018). An h index for Mendeley: Comparison of citation-based h indices and a readership-based h men index for 29 authors. Scientometrics, 117 , 615–624. https://doi.org/10.1007/s11192-018-2882-8 .

Bar-Ilan, J. (2008). Which h-index?—A comparison of WoS, Scopus and Google Scholar. Scientometrics, 74 (2), 257–271. https://doi.org/10.1007/s11192-008-0216-y .

Bornmann, L., Mutz, R., & Daniel, H.-D. (2008). Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine. Journal of the American Society for Information Science and Technology, 59 (5), 830–837. https://doi.org/10.1002/asi.20806 .

Bornmann, L., Mutz, R., Hug, S. E., & Daniel, H.-D. (2011). A multilevel meta-analysis of studies reporting correlations between the h index and 37 different h index variants. Journal of Informetrics, 5 , 346–359. https://doi.org/10.1016/j.joi.2011.01.006 .

Buchanan, E. A., & Hvizdak, E. E. (2009). Online survey tools: Ethical and methodological concerns of human research ethics committees. Journal of Empirical Research on Human Research Ethics, 4 (2), 37–48. https://doi.org/10.1525/jer.2009.4.2.37 .

Buela-Casal, G., & Zych, I. (2012). What do the scientists think about the impact factor? Scientometrics, 92 , 281–292. https://doi.org/10.1007/s11192-012-0676-y .

Alonso, S., Cabrerizo, F. J., Herrera-Viedma, E., & Herrera, F. (2009). H-index: A review focused in its variants, computation and standardization for different scientific fields. Journal of Informetrics, 3 (4), 273–289. https://doi.org/10.1016/j.joi.2009.04.001 .

Chen, C. M.-L., & Lin, W.-Y. C. (2018). What indicators matter? The analysis of perception toward research assessment indicators and Leiden Manifesto. The case study of Taiwan. In R. Costas, T. Franssen, & A. Yegros-Yegros (Eds.), Proceedings of the 23rd International Conference on Science and Technology Indicators (STI 2018) (12–14 September 2018) (pp. 688–698). Leiden, NL: Centre for Science and Technology Studies (CWTS). https://openaccess.leidenuniv.nl/bitstream/handle/1887/65192/STI2018_paper_121.pdf?sequence=1.

Cohen, J. (1988). Statistical power analysis for the behavioral science . (2nd ed.). Hillsdale: Lawrence Erlbaum. https://doi.org/10.4324/9780203771587 .


Cohen, J. (1992). A power primer. Psychological Bulletin, 112 (1), 155–159. https://doi.org/10.1037//0033-2909.112.1.155 .

Costas, R., & Bordons, M. (2007). The h-index: Advantages, limitations and its relation with other bibliometric indicators at the micro level. Journal of Informetrics, 1 (3), 193–203. https://doi.org/10.1016/j.joi.2007.02.001 .

da Silva, J. A. T., & Dobranszki, J. (2018). Multiple versions of the h-index: Cautionary use for formal academic purposes. Scientometrics, 115 (2), 1107–1113. https://doi.org/10.1007/s11192-018-2680-3 .

Derrick, G. E., & Gillespie, J. (2013). A number you just can’t get away from: Characteristics of adoption and the social construction of metric use by researchers. In S. Hinze & A. Lottman (Eds.), Proceedings of the 18th International Conference on Science and Technology Indicators (pp. 104–116). Berlin, DE: Institute for Research Information and Quality Assurance. http://www.forschungsinfo.de/STI2013/download/STI_2013_Proceedings.pdf.

Destatis. (2019). Bildung und Kultur. Personal an Hochschulen (Fachserie 11, Reihe 4.4). Wiesbaden, Germany: Statistisches Bundesamt. https://www.destatis.de/DE/Themen/Gesellschaft-Umwelt/Bildung-Forschung-Kultur/Hochschulen/Publikationen/Downloads-Hochschulen/personal-hochschulen-2110440187004.html.

Deutscher Hochschulverband. (2020). Hochschullehrer-Verzeichnis 2020, Band 1: Universitäten Deutschland. 28th Ed. Berlin, New York: De Gruyter Saur. https://db.degruyter.com/view/product/549953.

Dilthey, W. (1895). Ideen über eine beschreibende und zergliedernde Psychologie. Sitzungsberichte der königlich preussischen Akademie der Wissenschaften zu Berlin, 7. Juni 1894, Ausgabe XXVI, Sitzung der philosophisch historischen Classe , 1–88. http://www.uwe-mortensen.de/Dilthey%20Ideen%20beschreibendezergliederndePsychologie.pdf.

Ding, J., Liu, C., & Kandonga, G. A. (2020). Exploring the limitations of the h-index and h-type indexes in measuring the research performance of authors. Scientometrics, 122 (3), 1303–1322. https://doi.org/10.1007/s11192-020-03364-1 .

Dinis-Oliveira, R. J. (2019). The h-index in life and health sciences: Advantages, drawbacks and challenging opportunities. Current Drug Research Reviews, 11 (2), 82–84. https://doi.org/10.2174/258997751102191111141801 .

Dorsch, I. (2017). Relative visibility of authors’ publications in different information services. Scientometrics, 112 , 917–925. https://doi.org/10.1007/s11192-017-2416-9 .

Dorsch, I., Askeridis, J., & Stock, W. G. (2018). Truebounded, overbounded, or underbounded? Scientists’ personal publication lists versus lists generated through bibliographic information services. Publications, 6 (1), 1–9. https://doi.org/10.3390/publications6010007 .

Farlin, J., & Majewski, M. (2013). Performance indicators: The educational effect of publication pressure on young researchers in environmental sciences. Environmental Science and Technology, 47 (6), 2437–2438. https://doi.org/10.1021/es400677m .

Fietkiewicz, K. J., Lins, E., Baran, K. S., & Stock, W. G. (2016). Inter-generational comparison of social media use: Investigating the online behavior of different generational cohorts. In Proceedings of the 49th Hawaii international conference on system sciences (pp. 3829–3838). Washington, DC: IEEE Computer Society. https://doi.org/10.1109/HICSS.2016.477.

Geraci, L., Balsis, S., & Busch, A. J. B. (2015). Gender and the h index in psychology. Scientometrics, 105 (3), 2023–2043. https://doi.org/10.1007/s11192-015-1757-5 .

Haddow, G., & Hammarfelt, B. (2019). Quality, impact, and quantification: Indicators and metrics use by social scientists. Journal of the Association for Information Science and Technology, 70 (1), 16–26. https://doi.org/10.1002/asi.24097 .

Haladyna, T. M., & Rodriguez, M. C. (2013). Developing and validating test items . New York: Routledge. https://doi.org/10.4324/9780203850381 .

Halevi, G., Moed, H., & Bar-Ilan, J. (2017). Suitability of Google Scholar as a source of scientific information and as a source of data for scientific evaluation. Review of the literature. Journal of Informetrics, 11 (3), 823–834. https://doi.org/10.1016/j.joi.2017.06.005 .

Hammarfelt, B., & Haddow, G. (2018). Conflicting measures and values: How humanities scholars in Australia and Sweden use and react to bibliometric indicators. Journal of the Association for Information Science and Technology, 69 (7), 924–935. https://doi.org/10.1002/asi.24043 .

Haustein, S. (2018). Metrics literacy [Blog post]. https://stefaniehaustein.com/metrics-literacy/

Haustein, S., & Larivière, V. (2015). The use of bibliometrics for assessing research: Possibilities, limitations and adverse effects. In I. Welpe, J. Wollersheim, S. Ringelhan, & M. Osterloh (Eds.), Incentives and performance: Governance of research organizations (pp. 121–139). Cham, CH: Springer. https://doi.org/10.1007/978-3-319-09785-5_8

Hilbert, F., Barth, J., Gremm, J., Gros, D., Haiter, J., Henkel, M., Reinhardt, W., & Stock, W. G. (2015). Coverage of academic citation databases compared with coverage of social media: Personal publication lists as calibration parameters. Online Information Review, 39 (2), 255–264. https://doi.org/10.1108/OIR-07-2014-0159 .

Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102 (46), 16569–16572. https://doi.org/10.1073/pnas.0507655102 .

Hirsch, J. E. (2007). Does the h index have predictive power? Proceedings of the National Academy of Sciences of the United States of America, 104 (49), 19193–19198. https://doi.org/10.1073/pnas.0707962104 .

Hirsch, J. E. (2020). Superconductivity, What the h? The emperor has no clothes. Physics and Society, 49 (1), 4–9.

Hirsch, J. E., & Buela-Casal, G. (2014). The meaning of the h-index. International Journal of Clinical and Health Psychology, 14 (2), 161–164. https://doi.org/10.1016/S1697-2600(14)70050-X .

Hu, G. Y., Wang, L., Ni, R., & Liu, W. S. (2020). Which h-index? An exploration within the Web of Science. Scientometrics, 123 , 1225–1233. https://doi.org/10.1007/s11192-020-03425-5 .

Jan, R., & Ahmad, R. (2020). H-index and its variants: Which variant fairly assess author’s achievements. Journal of Information Technology Research, 13 (1), 68–76. https://doi.org/10.4018/JITR.2020010105 .

Jappe, A. (2020). Professional standards in bibliometric research evaluation? A meta-evaluation of European assessment practice 2005–2019. PLoS ONE, 15 (4), 1–23. https://doi.org/10.1371/journal.pone.0231735 .

Kagan, J. (2009). The three cultures. Natural sciences, social sciences, and the humanities in the 21st century . Cambridge, MA: Cambridge University Press. https://www.cambridge.org/de/academic/subjects/psychology/psychology-general-interest/three-cultures-natural-sciences-social-sciences-and-humanities-21st-century?format=HB&isbn=9780521518420.

Kamrani, P., Dorsch, I., & Stock, W. G. (2020). Publikationen, Zitationen und H-Index im Meinungsbild deutscher Universitätsprofessoren. Beiträge zur Hochschulforschung, 42 (3), 78–98. https://www.bzh.bayern.de/fileadmin/user_upload/Publikationen/Beitraege_zur_Hochschulforschung/2020/3_2020_Kamrani-Dorsch-Stock.pdf .

Kelly, C. D., & Jennions, M. D. (2006). The h index and career assessment by numbers. Trends in Ecology and Evolution, 21 (4), 167–170. https://doi.org/10.1016/j.tree.2006.01.005 .

Kowalski, C. J., & Mrdjenovich, A. J. (2016). Beware dichotomies. Perspectives in Biology and Medicine, 59 (4), 517–535. https://doi.org/10.1353/pbm.2016.0045 .

Krempkow, R., Schulz, P., Landrock, U., & Neufeld, J. (2011). Die Sicht der Professor/innen auf die Leistungsorientierte Mittelvergabe an Medizinischen Fakultäten in Deutschland . Berlin: iFQ–Institut für Forschungsinformation und Qualitätssicherung. http://www.forschungsinfo.de/Publikationen/Download/LOM_Professorenbefragung.pdf.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77 (6), 1121–1134. https://doi.org/10.1037/0022-3514.77.6.1121 .

Kruskal, W. H., & Wallis, W. A. (1952). Use of ranks in one-criterion variance analysis. Journal of the American Statistical Association, 47 (260), 583–621. https://doi.org/10.1080/01621459.1952.10483441 .

Kulczycki, E., Engels, T. C. E., Pölönen, J., Bruun, K., Dušková, M., Guns, R., Nowotniak, R., Petr, M., Sivertsen, G., Istenič Starčič, A., & Zuccala, A. (2018). Publication patterns in the social sciences and humanities: Evidence from eight European countries. Scientometrics, 116 (1), 463–486. https://doi.org/10.1007/s11192-018-2711-0 .

Kyvik, S. (1990). Age and scientific productivity. Differences between fields of learning. Higher Education, 19 , 37–55. https://doi.org/10.1007/BF00142022 .

Lemke, S., Mehrazar, M., Mazarakis, A., & Peters, I. (2019). “When you use social media you are not working”: Barriers for the use of metrics in Social Sciences. Frontiers in Research Metrics and Analytics, 3 (39), 1–18. https://doi.org/10.3389/frma.2018.00039 .

Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 22 (140), 5–55.

Linde, F., & Stock, W. G. (2011). Information markets . Berlin, New York: De Gruyter Saur. https://doi.org/10.1515/9783110236101 .

Ma, L., & Ladisch, M. (2019). Evaluation complacency or evaluation inertia? A study of evaluative metrics and research practices in Irish universities. Research Evaluation, 28 (3), 209–217. https://doi.org/10.1093/reseval/rvz008 .

Mann, H., & Whitney, D. (1947). On a test of whether one of two random variables is stochastically larger than the other. Annals of Mathematical Statistics, 18 (1), 50–60. https://doi.org/10.1214/aoms/1177730491 .

Martin-Martin, A., Orduna-Malea, E., Thelwall, M., & Lopez-Cozar, E. D. (2018). Google Scholar, Web of Science, and Scopus: A systematic comparison of citations in 252 subject categories. Journal of Informetrics, 12 (4), 1160–1177. https://doi.org/10.1016/j.joi.2018.09.002 .

Meschede, C., & Siebenlist, T. (2018). Cross-metric compatibility and inconsistencies of altmetrics. Scientometrics, 115 (1), 283–297. https://doi.org/10.1007/s11192-018-2674-1 .

Mongeon, P., & Paul-Hus, A. (2016). The journal coverage of Web of Science and Scopus: A comparative analysis. Scientometrics, 106 (1), 213–228. https://doi.org/10.1007/s11192-015-1765-5 .

Neufeld, J., & Johann, D. (2016). Wissenschaftlerbefragung 2016. Variablenbericht – Häufigkeitsauszählung . Berlin: Deutsches Zentrum für Hochschul- und Wissenschaftsforschung. https://www.volkswagenstiftung.de/sites/default/files/downloads/Wissenschaftlerbefragung%202016%20-%20Variablenbericht%20-%20H%C3%A4ufigkeitsausz%C3%A4hlungen.pdf

Packalen, M., & Bhattacharya, J. (2015). Age and the trying out of new ideas. Cambridge, MA: National Bureau of Economic Research. (NBER Working Paper Series; 20920). http://www.nber.org/papers/w20920.

Pearson, K. (1900). On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, Series 5, 50 (302), 157–175.

Penny, D. (2016). What matters where? Cultural and geographical factors in science. Slides presented at 3rd altmetrics conference, Bucharest, 2016. https://figshare.com/articles/What_matters_where_Cultural_and_geographical_factors_in_science/3969012 .

Rousseau, R., Egghe, L., & Guns, R. (2018). Becoming metric-wise: A bibliometric guide for researchers . Cambridge, MA: Chandos.

Rousseau, S., & Rousseau, R. (2015). Metric-wiseness. Journal of the Association for Information Science and Technology, 66 (11), 2389. https://doi.org/10.1002/asi.23558 .

Rousseau, S., & Rousseau, R. (2017). Being metric-wise: Heterogeneity in bibliometric knowledge. El Profesional de la Informatión, 26 (3), 480–487.

Rowlands, I., Nicholas, D., William, P., Huntington, P., Fieldhouse, M., Gunter, B., Withey, R., Jamali, H. R., Dobrowolski, T., & Tenopir, C. (2008). The Google generation: The information behaviour of the researcher of the future. Aslib Proceedings, 60 (4), 290–310. https://doi.org/10.1108/00012530810887953 .

Snow, C. P. (1959). The two cultures and the scientific revolution . Cambridge: Cambridge University Press.

Stock, W. G. (2001). Publikation und Zitat. Die problematische Basis empirischer Wissenschaftsforschung. Köln: Fachhochschule Köln; Fachbereich Bibliotheks- und Informationswesen (Kölner Arbeitspapiere zur Bibliotheks- und Informationswissenschaft; 29). https://epb.bibl.th-koeln.de/frontdoor/deliver/index/docId/62/file/Stock_Publikation.pdf.

Stock, W. G., & Stock, M. (2013). Handbook of information science . De Gruyter Saur. https://doi.org/10.1515/9783110235005 .

Sugimoto, C. R., & Larivière, V. (2018). Measuring research: What everyone needs to know . New York: Oxford University Press.

Tetzner, R. (2019). What is a good h-index required for an academic position? [Blog post]. https://www.journal-publishing.com/blog/good-h-index-required-academic-position/.

Thelwall, M., Haustein, S., Larivière, V., & Sugimoto, C. R. (2013). Do altmetrics work? Twitter and ten other social web services. PLoS ONE, 8 (5), e64841. https://doi.org/10.1371/journal.pone.0064841 .

Vehovar, V., Toepoel, V., & Steinmetz, S. (2016). Non-probability sampling. In C. Wolf, D. Joye, T. W. Smith, & Y.-C. Fu (Eds.), The SAGE handbook of survey methodology. (pp. 327–343). London: Sage. https://doi.org/10.4135/9781473957893.n22 .

Vera-Baceta, M. A., Thelwall, M., & Kousha, K. (2019). Web of science and Scopus language coverage. Scientometrics, 121 (3), 1803–1813. https://doi.org/10.1007/s11192-019-03264-z .

Waltman, L., & van Eck, N. J. (2012). The inconsistency of the h-index. Journal of the American Society for Information Science and Technology, 63 (2), 406–415. https://doi.org/10.1002/asi.21678 .

Funding: Open Access funding enabled and organized by Projekt DEAL. No external funding.

Author information

Authors and Affiliations

Department of Information Science, Heinrich Heine University Düsseldorf, Düsseldorf, Germany

Pantea Kamrani, Isabelle Dorsch & Wolfgang G. Stock

Department of Operations and Information Systems, Karl Franzens University Graz, Graz, Austria

Wolfgang G. Stock

Contributions

Conceptualization: PK, ID, WGS; Methodology: PK, ID, WGS; Data collection: PK; Writing and editing: PK, ID, WGS; Supervision: WGS.

Corresponding author

Correspondence to Wolfgang G. Stock .

Ethics declarations

Conflict of interest.

No conflicts of interest, no competing interests.

Appendix 1: List of all questions (translated from German)

Appendix 2: Data analysis plan (intuitive sketch)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Kamrani, P., Dorsch, I. & Stock, W.G. Do researchers know what the h-index is? And how do they estimate its importance?. Scientometrics 126 , 5489–5508 (2021). https://doi.org/10.1007/s11192-021-03968-1

Received : 19 May 2020

Accepted : 23 March 2021

Published : 26 April 2021

Issue Date : July 2021

DOI : https://doi.org/10.1007/s11192-021-03968-1

Keywords: Bibliometrics, Researchers, Generations, Knowledge fields, Google Scholar, Metrics literacy

What is a Good H-index?


You have finally made it through the exhausting process of publishing a paper, and you are thinking it's time to relax for a while. Maybe you are right to do so, but don't take too long: just like the research process itself, a career as a published author is judged by results. Today there are tools that can tell you whether your publications are reaching the audience you believed they would. One of the most common tools researchers use is the H-index score.

Knowing how impactful your publications are among your audience is key to defining your individual performance as a researcher and author, and it helps the scientific community compare professionals in the same research field (and with similar career lengths). Although scoring intellectual activity is often a subject of debate, it also brings its own benefits:

  • Inside the scientific community: Standardising measures of researchers' performance makes it possible to compare them within their field of research. For example, H-index scores are commonly used in recruitment processes for academic positions and taken into consideration in applications for academic or research grants. At the end of the day, the H-index serves as a marker of standing for scholars in almost every field of research.
  • From an individual point of view: Knowing the impact of your work among your target audience is especially important in the academic world. With careful analysis and the right amount of reflection, the H-index can give you clues and ideas on how to design and implement future projects. If your paper is not being cited as much as you expected, try to find out what the problem might have been. For example, was the research content irrelevant to the audience? Was the selected journal wrong for your paper? Was the text poorly written? For the latter, consider Elsevier's text editing and translation services to improve your chances of being cited by other authors and raising your H-index.

What is my H-index?

Basically, the H-index is a standard scholarly metric that puts the number of papers an author has published in relation to the number of times those papers have been cited. An author's H-index is the largest number H such that H of their papers have each been cited at least H times. As a practical example:

A researcher whose 6 most-cited publications have each been cited at least 6 times, while the rest have fewer citations, scores an H-index of 6. The remaining articles, those that have not yet reached 6 citations, are left aside.
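This calculation can be sketched in a few lines of Python; the citation counts below are hypothetical, chosen so the result matches the example above:

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    have each been cited at least h times (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still has at least `rank` citations
        else:
            break  # lower-ranked papers cannot raise the index
    return h

# Ten hypothetical papers with their citation counts
print(h_index([25, 18, 12, 9, 8, 7, 4, 3, 1, 0]))  # → 6
```

Note that only the six papers with at least six citations each contribute to the score; papers cited fewer than six times are left aside, exactly as described above.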

A good H-index score depends not only on a prolific output but also on a large number of citations by other authors. It is important, therefore, that your research reaches a wide audience, preferably one to whom your topic is particularly interesting or relevant, in a clear, high-quality text. Young researchers and inexperienced scholars often look for articles that offer academic security by leaving no room for doubts or misinterpretations.

What is a good H-index score for a journal?

Journals also have their own H-index scores. Publishing in a high H-index journal maximizes your chances of being cited by other authors and, consequently, may improve your own personal H-index score. Some of the "giants" with the highest H-index scores are journals from top universities, like Oxford University, with the highest score being 146, according to Google Scholar.

Knowing the H-index scores of journals of interest is useful when searching for the right one for your next paper. Even if you are just starting out as an author and do not yet have an H-index score of your own, you may want to start in the right place to get your citation record off the ground.

See below some of the most commonly used databases that help authors find their H-index values:

  • Elsevier's Scopus: Includes Citation Tracker, a feature that shows how often an author has been cited. To this day, it is the largest abstract and citation database of peer-reviewed literature.
  • Clarivate Analytics Web of Science: a digital platform that provides the h-index through its Citation Reports feature.
  • Google Scholar: a growing database that calculates H-index scores for those who have a profile.

Maximize the impact of your research by publishing high-quality articles. A richly edited text with flawless grammar may be all you need to capture the eye of other authors and researchers in your field. With Elsevier, you have the guarantee of excellent output, no matter the topic or your target journal.

  • University of Michigan Library
  • Research Guides

Research Impact Assessment (Health Sciences)

What is the h-index?

Use of the h-index is controversial. Some organizations use it to evaluate researchers, while others do not. As information professionals, we do not advise using the h-index without fully understanding its limitations and caveats.

Use the h-index with extreme caution.

  • Article: An index to quantify an individual's scientific research output by J. E. Hirsch, 2005. "I propose the index h, defined as the number of papers with citation number ≥h, as a useful index to characterize the scientific output of a researcher."
  • Blog post: Halt the h-index by Sarah de Rijcke, Ludo Waltman, and Thed van Leeuwen, 2021. "Using the h-index in research evaluation? Rather not. But why not, actually? Why is using this indicator so problematic? And what are the alternatives anyway?"

Image: Screenshot of metrics listed in an author profile in Michigan Experts, showing the author's number of publications per year and the h-index from four different sources: Scopus, Dimensions, Web of Science, and Europe PMC.

  • This indicator typically varies by source (e.g., different values in Google Scholar, Scopus, and Web of Science).
  • It is not field-normalized and is not an accurate comparison of productivity across disciplines.
  • It is weighted positively towards mid and late-career researchers as publications have had more time to accrue citations.

There are several variations of the h-index, including:

  • i10-index: A productivity indicator created by Google Scholar and used in Google's My Citations feature. It represents the number of publications with at least 10 citations.
  • g-index: Created by Leo Egghe in 2006, the g-index gives more weight to authors' highly cited articles.
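Under the definitions above, both variants are easy to compute. The following Python sketch uses hypothetical citation counts; the g-index is the largest g such that the g most-cited papers together have at least g² citations:

```python
def g_index(citations):
    """g-index (Egghe, 2006): largest g such that the top g papers
    together have at least g**2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites  # running sum of the top `rank` citation counts
        if total >= rank * rank:
            g = rank
    return g

def i10_index(citations):
    """i10-index (Google Scholar): number of papers with at least 10 citations."""
    return sum(c >= 10 for c in citations)

# Ten hypothetical papers and their citation counts
papers = [25, 18, 12, 9, 8, 7, 4, 3, 1, 0]
print(g_index(papers), i10_index(papers))  # → 9 3
```

This sketch caps g at the number of papers; some formulations pad the list with fictitious zero-cited papers so that g can exceed it. Because the g-index sums citations, a few very highly cited articles raise it more than they raise the h-index.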

Where can I find my h-index?

The resources below contain author profiles which list an h-index. Remember, this metric typically varies by source, so an author's h-index in Scopus may be different than the one in Google Scholar.

  • Scopus 1. Once in Scopus, select "Authors" and perform search for your name. 2. Click on the correct name in the search results to view the full author profile, where the h-index is listed. 3. Click on "Analyze author output" for additional citation data.
  • Web of Science 1. Once in Web of Science, click on "Author Search" and search for your name. 2. Click on the correct name in the list of results to view the full author profile, including the h-index.
  • Google Scholar @ U-M 1. Search for the author or an article by the author. 2. On the search results page, click on the author's name to view their Google Scholar profile which includes the h-index. Note: not all authors have Google Scholar profiles; underlined author names indicate that a profile page exists.
  • Michigan Experts 1. In the bottom right, under "Useful Links" click on "Edit Your Michigan Experts Profile." 2. Log in with your U-M id and password. 3. Scroll down to the box called "H-Index" and h-index is listed there for the data sources that Michigan Experts uses.

Whether a text is interesting and you get something out of it is more important than whether it is published somewhere important. (PhD candidate, humanities)

Hopefully, your PhD research will make an impact by advancing knowledge in your field or by contributing to real-world applications. While these kinds of impact are difficult to measure validly, data on how often and how broadly research is cited provide more or less useful approximations of its degree. There are, however, important aspects of research beyond those captured by citation-based metrics, and recent initiatives have spurred a growing interest in a broader and fairer basis for research assessment. On this page you will learn about

  • the Declaration on Research Assessment (DORA) and the recent movement to find fair and robust ways of evaluating research that do not rely on impact factors
  • how bibliometric indicators, such as journal rank (e.g. the impact factor) and h-index, are calculated
  • criticisms voiced against bibliometric impact measures and their application
  • the possible roles of citations in research and in research evaluations
  • possible implications of bibliometric indicators for your research and your career
  • how practising open science may improve your research impact

The Declaration on Research Assessment (DORA)

Evaluating research and researchers is not easy. While citation-based impact metrics, such as the journal impact factor, are convenient and have been popular, they have serious limitations and drawbacks as research assessment tools.

The Declaration on Research Assessment (DORA) is a global, cross-disciplinary initiative that embodies an awareness of the need to develop better methods for evaluating research and researchers. The number of signatories is growing, and it has been signed by the Research Council of Norway and a number of Norwegian research institutions. Several major research funders are also among its signatories (e.g. the Wellcome Trust).

Signatories of DORA commit to not using journal-based metrics (such as the impact factor) in decisions regarding funding, hiring and promotion, to be explicit about the criteria used for making such decisions, to consider the value and impact of all research outputs (e.g. datasets and methods innovations), and to expand the range of impact measures to include such things as influence on policy and practice.

DORA represents an important development. Arguably, it implies that you may benefit from considering ways in which you can describe and provide documentation of any influence your work may have that are not captured by citation-based impact metrics. That said, citation based metrics continue to be important and to evolve. In the following sections, we will provide an introduction to some of them.

Journal rank

The most well-known measure of journal rank is the journal impact factor (often abbreviated to IF or JIF). It was developed in order to select the journals to be included in the Science Citation Index (Garfield, 2006). The impact factor is a measure of how often articles in a particular journal have been cited on average in a given year. The central idea is that the impact factor and similar measures of journal rank indicate the journal's relative influence among journals within the same subject category.

Calculating a journal's impact factor

A journal's impact factor is based on citation data from the Web of Science database, owned by Clarivate Analytics. If your institution has purchased the appropriate licence, you may be able to look up a journal's impact factor and related statistics there. While using the impact factor in research evaluation is controversial (see Critical remarks, below), as a PhD candidate you should know what it is and how it is calculated.

The impact factor (IF) is the ratio of (A) the number of citations in the current year to items published in the previous two years to (B) the number of citable articles published in the same two years: IF=A/B.

General formula

Consider the journal Proceedings of the National Academy of Sciences (PNAS). This journal published a total of 6 620 citable articles in 2018-2019. In 2020, the total number of citations to articles from these two previous years was 74 177 (see table below).

Table 1: Citations and publications involved in the calculation of the impact factor for PNAS for 2020

Substituting in the general formula, we arrive at an impact factor for PNAS for 2020 of 11.2 (74 177 / 6 620 = 11.2).
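The same arithmetic can be sketched in Python, using the PNAS figures quoted above:

```python
def impact_factor(citations_in_year, citable_items_prev_two_years):
    """Journal impact factor for year Y: citations received in Y to items
    published in Y-1 and Y-2, divided by the citable items from Y-1 and Y-2."""
    return citations_in_year / citable_items_prev_two_years

# PNAS, 2020: 74,177 citations to the 6,620 citable articles of 2018-2019
print(round(impact_factor(74_177, 6_620), 1))  # → 11.2
```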

CRITICAL REMARKS: IMPACT FACTOR

Over the years, criticism has been raised against the impact factor. You can read more about some of the critical remarks here.

No verifiable relation to quality: The impact factor is associated with a journal's prestige, and is sometimes considered a proxy for the scientific quality of the work it publishes. Unfortunately, there is no verifiable association between journal impact factor and reasonable indicators of quality (for an overview of the relevant research, see Brembs, 2018).

Invalid measure of central tendency: As can be seen from the formula and example calculation above, the journal impact factor is, roughly, a mean. Means are sensible indicators of central tendency if the distribution of values is symmetrical. However, citations to scholarly articles are not symmetrically distributed. Most published articles receive few, or even no, citations, while a small number of articles become very highly cited. This skewness means that a journal's impact factor is a poor predictor of the citation count of any given single article published in that journal (Seglen, 1997; Zhang, Rousseau & Sivertsen, 2017).

Field dependency: Because the impact factor is field dependent (due to different citation conventions), only journals within the same scientific field are comparable. Nevertheless, the impact factor is sometimes inappropriately used to compare journals from different fields (Adler, Ewing & Taylor, 2009).

Anglo-American bias: The pool of selected journals has a strong Anglo-American bias. Influential journals written in other languages are rarely captured by Clarivate Analytics Journal Citation Reports.

Unintended use: The impact factor is not only used for ranking journals according to their relative influence, as initially intended, but also for measuring the performance of individual researchers. Given the skewness of citation distributions described above, this is a misapplication. The use of the impact factor when applied to individual researchers has been criticised by a broad scholarly community, not least by the co-creator of the Science Citation Index, Eugene Garfield, himself.

Typically, when the author's bibliography is examined, a journal's impact factor is substituted for the actual citation count. Thus, use of the impact factor to weight the influence of a paper amounts to a prediction, albeit coloured by probabilities. (Garfield, 1999)

Manipulation: The impact factor can be manipulated. Editors may influence the value of their journal's impact factor by writing editorials containing references to articles in their own journal (journal self-citations). In addition, references given in an editorial count towards the numerator, while editorials do not count towards the denominator: by definition, the denominator consists only of citable articles, and editorials are not regarded as such.

Incomplete references: References given in articles may be incomplete or incorrect. Incorrect references are not corrected automatically and are therefore not added to the citation counts. This affects the value of the impact factor and of other citation indicators such as the h-index.

Alternative journal indicators

In order to compensate for some of the weaknesses of the impact factor (field dependency, inclusion of self-citations, length of citation window, quality of citations), efforts have been undertaken to develop better journal indicators. More advanced metrics are usually based on network analysis, such as the SCImago Journal Rank (SJR) and the Source Normalised Impact per Paper (SNIP), both based on data from Scopus. While such measures arguably do a better job of ranking journals, they are still only applicable to journals and should not be used to evaluate research output at the level of individual researchers. For that purpose, the h-index, introduced below, is better suited.

The h-index

The h-index is a measure of the total, citation-based impact of a researcher. It combines scientific production (number of publications) and impact (number of citations).

  • The h-index is the largest number h such that the author has at least h publications that have each been cited at least h times.

When exploring the literature of your research field, the h-index may give you an idea of the impact of individual researchers and research groups. You can retrieve the h-index from e.g. Web of Science, Scopus and Google Scholar. Some institutions or agencies may expect you to state your h-index when applying for a scholarship, project funding or a job. If you do, you should state from which source you retrieved it.

CALCULATING SOMEONE’S H-INDEX

In the table below, an author's publications (labelled a through j) have been rank ordered according to their number of citations.

Table 2: Publications sorted by decreasing number of citations.

Counting from left to right, we reach four publications (c, i, a, and g) before the number of citations becomes less than the count. This author's h-index is, therefore, equal to 4.

Naturally, there is no need to actually calculate someone's h-index for yourself in this manner. It can easily be retrieved from various sources, as explained below.
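Still, the counting procedure is simple enough to express in code. A minimal sketch (the function name and the example citation counts are illustrative, since Table 2's values are not reproduced here):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that at least h publications have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited publications first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break  # counts are descending, so no later rank can qualify
    return h

# Four publications have a citation count at least as large as their rank, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```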

EXAMPLE: H-INDEX RETRIEVED IN WEB OF SCIENCE, SCOPUS AND GOOGLE SCHOLAR

To retrieve an author's h-index, you need to search for that author, and then identify the relevant link or menu option. It is labelled and located slightly differently in the three platforms in this example, and also changes from time to time, but is typically not very hard to locate.

In this example, we use a renowned Norwegian researcher in ecology and evolutionary biology: Nils C. Stenseth. We demonstrate that his h-index is different in each of the databases due to their different coverage of content. (The statistics were retrieved in January of 2022.)

In Web of Science, Stenseth is registered with a total of 514 publications, and is listed with an h-index of 89. In Scopus, 697 of his publications are indexed, and he is listed with an h-index of 93. Google Scholar, covering a wider range of sources, yields the highest h-index value of 121.

Note that the platforms offering citation statistics differ in how they source and index publications (e.g., in how they account for author name variations) and in the alternative statistics they offer (e.g., the h-index without self-citations, or the i10-index). You may need to account for this when looking up and interpreting such statistics.

CRITICAL REMARKS: H-INDEX

Using a three-year citation window we find that 36% of all citations represent author self-citations. However, this percentage decreases when citations are traced for longer periods. We find the highest share of self-citations among the least cited papers. (Aksnes, 2003)

The h-index alone does not give a complete picture of the performance of an individual researcher or research group. The h-index underrepresents the impact of the most cited publications and does not consider the long tail of rarely cited publications. In particular, the h-index cannot exceed the total number of publications of a researcher. The impact of researchers with a short scientific career may be underestimated and their potential undiscovered. Read more about this below, under 'Problem: The Matthew effect in science'.

  • The h-index is comparable only for authors working in the same field.
  • The h-index is comparable only for authors of the same scientific age.
  • The h-index differs between databases, depending on the coverage in the individual database.
  • The h-index depends on your institution's subscription time range. The h-index may underestimate researchers’ impact if their older publications are not included.
  • The h-index is manipulable. Exaggerated use of self-citations may influence the h-index and result in an inflated value.

Citations in communication

Citing is an activity maintaining intellectual traditions in scientific communication. Usually, citations and references provide peer recognition; when you use others' work by citing that work, you give credit to its creator. Citations are used for reasons of dialogue and express participation in an academic debate. They are aids to persuasion; assumed authoritative documents are selected to underpin further research. However, citations may be motivated by other reasons as well.

Citations may also express

  • criticism of earlier research
  • friendship to support colleagues
  • payment of intellectual debt, e.g. toward supervisors or collaborators
  • self-marketing one's own research, i.e. self-citations

Citations and evaluation

Applicable across fields? Note that scholarly communication varies from field to field. Comparisons across different fields are therefore problematic. However, there are attempts to make citation indicators field independent. For example, The Times Higher Education World University Rankings involve citation indicators which are field independent, i.e. normalised (Times Higher Education, 2013).

Citations are basic units measuring research output. Citations are regarded as an objective (or at least less subjective) measure to determine impact, i.e. influence and importance. They are used in addition to, or as a substitute for peer judgments.

There is a strong correlation between peer judgments and citation frequencies. For this reason, citations are relied on as indicators of quality and are used for e.g.

  • benchmarking universities
  • scholarship and employment decisions
  • political decisions regarding research funding
  • exploring research fields and identifying influential works and research trends

Citations must be handled carefully when evaluating research. Citation data vary from database to database, depending on the database's coverage of content. Furthermore, two problematic factors are the different motivations for citing and the considerable skewness of the distribution of citations.

PROBLEM: THE MATTHEW EFFECT IN SCIENCE

To those who have, shall be given...

When sorting a set of publications by the numbers of citations received, the distribution shows a typical exponential or skewed pattern. Works which have been cited are more visible and are more easily cited again (vertical tail in figure), while other works remain hidden and are hardly ever cited (horizontal tail in figure). This phenomenon is referred to as the Matthew effect in science.

Citation pattern

What is the problem with skewed distributions? Skewed patterns make it difficult to determine an average citation count. Different approaches may be applied, see the figure.

  • Mean citation count: Long vertical or horizontal tails distort the average value. The impact factor is an example of this type of average.
  • Citation median: Long vertical or horizontal tails distort the median value. For example, a long tail of rarely cited publications results in a low median value, while a minority of highly cited publications is ignored.
  • H-index: The h-index is an alternative average value. It is designed to compensate for the effect of long tails.
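The behaviour of these three summaries is easy to see on an invented, skewed citation list (the numbers below are purely illustrative):

```python
import statistics

# An invented, skewed citation distribution: one "hit" paper and a long tail.
citations = [120, 10, 6, 3, 2, 1, 0, 0, 0, 0]

mean = statistics.mean(citations)      # pulled up by the single highly cited paper
median = statistics.median(citations)  # pulled down by the rarely cited tail

# h-index: the number of ranks (citations sorted descending, 1-based)
# whose paper still has at least that many citations.
h = sum(1 for rank, c in enumerate(sorted(citations, reverse=True), 1) if c >= rank)

print(mean, median, h)  # → 14.2 1.5 3
```

Here the mean is inflated by one outlier and the median is dominated by the uncited tail, while the h-index falls between the two.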

Improve your impact

Good research + high visibility = Your best chance to make an impact 

Being aware of how academic performance is evaluated allows you to make informed decisions and devise strategies to build and document your impact, and thereby improve your career prospects. Our general advice centres on making your work visible , accessible and understandable .

Make your work visible to other researchers:

  • Publish with international publishers/journals that are known and read in your field. Your work is then more likely to be cited by your peers.
  • Share your work on social networks (beware of copyright issues). Impact measures based on usage and social network mentions are emerging and available on many websites, e.g. on journal and database websites. Work that has been shared and spread is more likely to get cited.
  • Engage and participate in scholarly debates in society, e.g. in the press. News coverage may also increase scholarly attention.
  • Showcase your work:
    - Create your scholarly online identity. People with common names are difficult to distinguish; to avoid ambiguity, create your profile (e.g. ORCID, Google Scholar) and link your publications to it.
    - Cite your previous work in order to give a broader picture of your research. However, do not overdo this, and make sure that you always stay in line with the topic discussed.
  • When looking for a publishing venue, consider journals (or publishers) that are indexed in databases used in your field; databases increase the visibility of your work.
  • Make sure your work is added to the research register at your institution via Cristin . Usually these data are used for performance measures, so state your name and institutional affiliation on your publications in a correct and consistent manner.
  • Collaborate with other researchers. In general, collaboration can benefit your career by increasing your production. Co-publishing may also imply borrowing status from more renowned co-authors who are read and cited regularly.

Make your work accessible to other researchers by adopting open science practices:

  • Post your work in repositories . If you have published in a subscription journal, archiving a version of your manuscript in an open repository will make it markedly more accessible.
  • Publish open access . Openly available articles or books are easily spread and cited. Publishing open access is encouraged and supported at many institutions, and mandated by many funding bodies.
  • Share your research data along with your publication. This strengthens your research and makes your findings replicable and verifiable.
  • Making your publications and your data openly available (as far as possible), is likely to increase your chances of having your work cited (McKiernan et al., 2016).

Make your work understandable to other researchers:

  • Use informative, memorable descriptions including key words in the title and abstract.
  • Place your findings in a larger context by citing the work of other researchers in your field.
  • Publish in English for a wider international distribution. If you have written in your native language, consider republishing it in English, or publish a summary of your main findings in an international journal.

References

Adler, R., Ewing, J., & Taylor, P. (2009). Citation statistics. Statistical Science, 24(1), 1-14.

Aksnes, D. W. (2003). A macro study of self-citation. Scientometrics, 56(2), 235-246. https://doi.org/10.1023/A:1021919228368

Brembs, B. (2018). Prestigious science journals struggle to reach even average reliability. Frontiers in Human Neuroscience, 12, Article 37. https://doi.org/10.3389/fnhum.2018.00037

Garfield, E. (1999). Journal impact factor: A brief review. Canadian Medical Association Journal, 161, 979-980. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1230709/

Garfield, E. (2006). The history and meaning of the journal impact factor. JAMA, 295, 90-93. https://doi.org/10.1001/jama.295.1.90

Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences, 102(46), 16569-16572. https://doi.org/10.1073/pnas.0507655102

McKiernan, E. C., Bourne, P. E., Brown, C. T., Buck, S., Kenall, A., Lin, J., . . . Yarkoni, T. (2016). How open science helps researchers succeed. eLife, 5, Article e16800. https://doi.org/10.7554/eLife.16800

Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. BMJ, 314(7079), 498-502. https://doi.org/10.1136/bmj.314.7079.497

Zhang, L., Rousseau, R., & Sivertsen, G. (2017). Science deserves to be judged by its contents, not by its wrapping: Revisiting Seglen's work on journal impact and research evaluation. PLoS ONE, 12(3), Article e0174205. https://doi.org/10.1371/journal.pone.0174205

NHH

PhD on Track

Enago Academy

How to Successfully Boost Your H-Index

How successful are you as a researcher? How good are your publications? What is the impact of your work?

There are, of course, many different ways to answer these questions. Traditionally, the strength of a researcher’s career has often been judged on their publications. Publications in a journal with a high impact factor are considered to be the most prestigious. The impact factor of a journal is based on the average number of citations that articles in the journal receive.

But is there a way to measure the success of an individual, rather than a journal or institution? The h-index offers one way to do this.

What is the H-Index?

Simply put, the h-index is a number that gives an idea of a researcher’s individual productivity and influence. The number is based on the papers a researcher publishes and the citations those papers get.

Publishing a lot of highly cited articles will increase your h-index. On the other hand, getting a lot of citations on only one or two papers will not give you a high h-index. For example, an h-index of 7 means that you have published at least 7 papers, each of which has been cited at least 7 times. If you are a "one-hit wonder", with only one paper that has a lot of citations, this will show up as a low h-index.

How to Calculate Your H-Index

Not sure how to calculate your h-index? Don't worry: several online tools can help. For example, you can use Google Scholar's citation profiles to track your citations, or Scopus, which offers similar functions. You could also use any other service that tracks citations, such as Web of Science.

In Scopus, for example, you simply need to search for the author’s name. Then just click on the correct search result to see details including the h-index.

Pros and Cons of the H-Index

As with all research metrics, there are pros and cons to representing a researcher's achievements as a single number. You might have already spotted some of these yourself.

The h-index offers a useful (and simple) way to compare researchers who are at similar stages in their careers. This could be particularly useful for recruiters or others who are not from the same field as the researcher.

Using the h-index stops a researcher with many low-impact publications from appearing more productive than someone with fewer but higher-impact publications. This leads us to the second pro: the h-index combines productivity and impact in a single metric.

One problem with the h-index is that it cannot be used to compare researchers in different fields. It is also open to being unfairly manipulated. For example, a group of researchers might cite each other's work frequently simply to increase their h-indices.

The h-index does not tell you whether a researcher was a sole author on a paper, or one of a huge group. This means that someone’s h-index could be boosted by work they had little to do with.

Finally, a big drawback of the h-index is that it tells you nothing about the science or ideas behind a researcher’s achievements.

Should You Care About Your H-Index?

The h-index is not perfect. So, should you really care about yours?

The answer is probably yes. These days, it is common for academic jobs to require a certain h-index. As most researchers know, competition for academic jobs, particularly postdocs, is very high. Giving a minimum h-index is a simple way to eliminate job seekers who won't make the grade.

A strong h-index could also help you get promoted, gain memberships to scientific organizations, or win research grants. It helps people who are not experts in your field to understand what you have achieved.

But what can you do if your h-index is not looking great?

Boosting Your H-Index

There are many ways to boost your h-index, some of which are unethical. For example, frequent or irrelevant self-citation will boost your h-index but adds no actual value.

Fortunately, there are several acceptable ways to boost your h-index.

  • Collaborate with more mature researchers. Research has shown that papers with famous first authors get more citations. So, if you are just starting out, try to collaborate with the most experienced researchers in your field.
  • Choose your journal carefully. Well-known, established journals get more readers, which leads to more citations. Try to think about which journal to aim for early in the research process.
  • Publish Open Access. As you might expect, open access journals get more citations. However, whether you choose open access partly depends on your field. In life sciences and medicine, for example, there are some very well-respected open access journals. In other areas, this is less true.
  • Think about your audience. When choosing a journal, consider its audience. Does the journal have a broad or narrow scope? A more specialist journal might be more likely to get citations from researchers in your own field.
  • Network, network, network. Attend conferences and meetings whenever you can. This will help you to promote your work and find new collaborators.
  • Work on your writing. Most readers will find your article with a search engine, so you might want to learn about search engine optimization. Make sure your article has a catchy but specific title, and think about what your keywords will be.
  • Show up on social media. If you don’t already, think about writing a blog about your work. Being present on social media will help to make other researchers aware of you and your work. It will also help you to connect with others in your field.

You might have noticed that most of these tips won’t just help to boost your h-index: they could actually help with all parts of your career.

Do you know your h-index? Have you ever been asked to provide it in a job application or grant proposal? Share your thoughts in the comments below.



Avidnote


What is a good h-index?

This article discusses what is considered a good h-index. If you are interested in a tool that supports your research writing along the way, check out Avidnote, an app for writing and organizing research notes.

It is not an easy task to quantitatively determine what makes one scholar more prolific than another, or to compare the bodies of work of different researchers. Although such comparisons are difficult, they are nonetheless important when allocating research grants or deciding whom to recruit or promote to an academic position. One way to gauge the output of a researcher is the h-index. Since it was introduced in 2005, it has become widely used because of its unique properties.

What is the h-index? The h-index is a measure of a researcher's productivity (number of publications) and impact (number of citations generated by those publications). It takes into account both the quantity of work, in terms of how many publications the author has produced, and the "quality" of those papers, in terms of how many citations they have obtained (assuming that accumulating many citations is a mark of quality, which is not necessarily true, but in the absence of better metrics it serves as a proxy).

The index was introduced in 2005 by Jorge Hirsch, a physicist at the University of California, San Diego, in a paper titled "An index to quantify an individual's scientific research output." Hirsch defined the index as "the number of papers with citation number ≥h" (that is, with papers ranked by citations in descending order, the largest rank at which the paper in that position still has at least that many citations) and saw it as a "useful index to characterize the scientific output of a researcher."

According to Hirsch,

“The publication record of an individual and the citation record clearly are data that contain useful information. That information includes the number (Np) of papers published over n years, the number of citations (Nc_j) for each paper (j), the journals where the papers were published, their impact parameter, etc. This large amount of information will be evaluated with different criteria by different people. Here, I would like to propose a single number, the ‘h index,’ as a particularly simple and useful way to characterize the scientific output of a researcher. A scientist has index h if h of his or her Np papers have at least h citations each and the other (Np − h) papers have ≤h citations each.”

He further notes that

“For the few scientists who earn a Nobel prize, the impact and relevance of their research is unquestionable. Among the rest of us, how does one quantify the cumulative impact and relevance of an individual’s scientific research output? In a world of limited resources, such quantification (even if potentially distasteful) is often needed for evaluation and comparison purposes (e.g., for university faculty recruitment and advancement, award of grants, etc.).”

Practical example

Consider a scientist who has published 5 papers with the following citation counts:

  • Publication 1: 11 citations
  • Publication 2: 9 citations
  • Publication 3: 7 citations
  • Publication 4: 4 citations
  • Publication 5: 3 citations

In the above example, the scientist's h-index is 4: four papers each have at least 4 citations. Paper 5 (the fifth position in the ranking) does not count towards the index, since its number of citations (3) is less than its rank (5). Note that the citations are always arranged in descending numerical order. Thus, a scholar with 9 publications that have each been cited at least 9 times has an h-index of 9.
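As a quick check, the ranking rule from the example can be written directly (an illustrative sketch, not code from any particular database):

```python
citations = [11, 9, 7, 4, 3]  # already sorted in descending order

# Count the 1-based ranks whose publication still has at least `rank` citations.
h = sum(1 for rank, c in enumerate(citations, start=1) if c >= rank)
print(h)  # → 4
```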

Because of its simplicity and logic, the h-index has become increasingly popular over the years. It is now a quick and accessible metric that researchers can use to track their scientific progress and impact. Different databases (e.g., Google Scholar, Scopus, Web of Science) have their own specific ways of calculating the h-index.

Determining a good h-index

Because publication and citation dynamics vary across fields of learning, it is not possible to precisely define a standard, uniform threshold for a good h-index. There are also variations among funding and recruitment organizations, which may consider factors such as the educational level and position of the applicant, as well as the type and size of the research project to be funded.

So what is a “good” h-index?

Here, it could be useful to refer to a study (Am J Clin Pathol 2019;151:286-91) that found that, on average, assistant professors have an h-index of 2-5, associate professors 6-10, and full professors 12-24. The statistical variance in the data set was quite large so the averages should be taken with a grain of salt. The same study found that if you are aiming for a Nobel Prize, your h-index needs to be at least 35 and preferably much closer to 70.

Note that the above figures are not fixed and conditions can differ significantly according to discipline (as noted above). For instance, there are cases where full professors, faculty deans, and vice-chancellors have very low h-index scores, while brilliant young researchers still pursuing PhDs have scores of between 10 and 15. The figures will also vary significantly depending on the research area.

[Figure: average citations per article by discipline. Life sciences average about 6 citations per article, whereas mathematics averages about 1 per paper.]

One widely accepted rule of thumb, however, is that a scholar's h-index should be at least equal to the number of years they have spent on their work. This rule goes back to Hirsch, who suggested that an h-index of about 20 after 20 years of scientific activity characterizes a successful scientist. One notable observation from Hirsch's paper is that accomplished scientists usually accumulate high h-index scores: the statistics in his 2005 paper show that 84% of Nobel laureates in physics had an h-index of at least 30.

Limitations of the h-index

One significant strength of the h-index is its combination of productivity (number of papers published) and impact (number of citations) into a single score. The implication is that neither many publications with few (or no) citations nor a handful of heavily cited publications will lead to a high h-index. Despite these strengths, however, the index has a number of weaknesses or limitations. Some of them are enumerated below.

  • Because the bibliometric databases differ in how completely they cover a scholar's works, the scholar's h-index can vary significantly from one database to another. For example, some databases do not provide sufficient coverage of publications in foreign languages.
  • It fails to consider the number of authors of a publication. The argument here is that a paper with 70 citations written by one author deserves more credit than one with the same number of citations written by, say, 7 authors.
  • Because of its direct relationship with time, the index tends to favour writers in the middle or terminal stages of their working life. For instance, had Albert Einstein died in early 1906, he would still be regarded as a very influential physicist, but his h-index would have been far lower (perhaps 4 or 5). He presently has an h-index of 119, according to Google Scholar.
  • Authors working in fields characterized by fast rates of publication, heavily cited papers, and multiple authorship tend to be favoured.
  • Review articles often exert a more significant impact on the h-index than original papers, because they are usually cited more frequently.

Due to the limitations of the h-index, some variants have emerged over the years, including the m-index, the contemporary h-index, and the i10-index. However, these other measures are not as widely applied as Hirsch's original index.

Even with its limitations, and despite skepticism among some scholars, metrics like the h-index remain important to the decision-makers who award research grants and recruit staff. The h-index has continued to expand in usage since Jorge Hirsch introduced it in 2005. Still, it is impossible to pin down a single figure that constitutes a good h-index, owing among other things to the differing publication and citation conditions across fields of knowledge.



FAFSA Illustration

Can School Counselors Help Students with "FAFSA Fiasco"?

Support for low-income prospective college students and their families more crucial than ever during troubled federal financial aid rollout   

The front of Gutman Library proudly displays welcome to HGSE banners.

A Place to Thrive

Explore how you can connect, grow, deepen your work, and expand your horizons at the Harvard Graduate School of Education.

Degree Programs

Through a rich suite of courses and co-curricular experiences, along with the mentorship of exceptional faculty, a degree from Harvard Graduate School of Education prepares you to make a difference in education today.

Fernando Reimers Teaching

Residential Master’s in Education

Immersive campus experience for aspiring and established educators, leaders, and innovators, with five distinct programs to choose from and rich opportunities to personalize your study and deepen your interests.

Online Master's in Education Leadership

Part-time, career-embedded program, delivered online, for experienced educators looking to advance their leadership in higher education or pre-K–12.

Doctor of Education Leadership

Preparing transformative leaders to have the capacity to guide complex organizations, navigate political environments, and create systemic change in the field of education.

Doctor of Philosophy in Education 

Training cutting-edge researchers who work across disciplines, generate knowledge, and translate discoveries into transformative policy and practice.

PPE Student

Professional Development

For early childhood professionals.

Programs designed to support the learning and development of early childhood professionals working in diverse settings.

For K-12 Professionals

A robust portfolio of programs serving teachers, school leaders, district administrators, and other education professionals.

For Higher Education Professionals

Leadership and career development programs for college and university administrators.

Ideas and Impact

From world-class research to innovative ideas, our community of students, faculty, and alumni are transforming education today.

Longfellow

Royal, Wippman Named Presidents-in-Residence

In its first year as a fully endowed program, the Judith Block McLaughlin Presidents-in-Residence program welcomes two new members.

Absenteeism illustration

Um... Where Is Everybody?

Families may be the key to ending chronic absenteeism, a pandemic-era problem that has only gotten worse

James Kim

Phase Two: The Reach

Reach Every Reader on its impact and the project’s next phase

Faculty in the Media

With deep knowledge of the education field, HGSE faculty members influence current conversations in the media, giving educators and students a much-needed voice for positive change.

Nadine Gaab

"Every child has the right to read well. Every child has the right to access their full potential. This society is driven by perfectionism and has been very narrow-minded when it comes to children who learn differently, including learning disabilities."
  • Skip to Content
  • Skip to Main Navigation
  • Skip to Search

h index of phd student

Indiana University Indianapolis Indiana University Indianapolis IU Indianapolis

Open Search

  • How to Apply
  • Cost & Financial Aid
  • Housing & Community
  • Parents & Families
  • Request Info
  • B.S. Exercise Science
  • B.S. Fitness Management & Personal Training
  • B.S. Physical Education Teacher Education
  • M.S. Kinesiology

Ph.D. Exercise Science

  • Certificates & Minors
  • B.S. Health Sciences
  • M.S. Health Sciences
  • Dual B.S. & M.S. Health Sciences
  • Dual B.S. & M.S. Health Informatics
  • Ph.D. Health & Rehabilitation Sciences
  • Non-thesis Track
  • M.S. + Dietetic Internship
  • Doctorate in Nutrition & Dietetics
  • Postprofessional DND
  • Undergraduate Nutrition Certificate
  • Doctor of Occupational Therapy
  • Postprofessional OTD
  • Doctor of Physical Therapy
  • Dual DPT & Ph.D.
  • Physician Assistant Studies
  • B.S. Tourism, Conventions, & Event Management
  • B.S. Sports Management
  • Dual B.S. & M.S. Sports Analytics
  • Student Research Opportunities
  • Service Learning Programs
  • Peer Advisors
  • Career Development
  • Health Sciences Internships
  • Kinesiology Internships
  • Tourism, Event, & Sport Management
  • Honors Program
  • Scholarships & Financial Aid
  • Ambassadors
  • Wellness Resources
  • Camp Brosius
  • HLSC, KINE, & TESM Courses
  • Internship Courses
  • Focus Rooms
  • Apply to graduate
  • Major, Minor, or Certificate Declaration
  • Ways to Make a Gift
  • Recurring Gifts
  • Share Your Success
  • Connect with Us
  • News & Events
  • Faculty & Staff Directory

School of Health & Human Sciences

A female student conducting a study

  • Exercise & Kinesiology

Become an expert in the exercise science field

Our exercise science Ph.D. program will empower you to become a leading authority whether your career trajectory involves academia, private industry, research, or the government.

This is a full time, face-to-face, research-based doctoral program that includes 90 credit hours of graduate study taught by nationally and internationally recognized faculty.

You will receive training through a rigorous, mentor-based interdisciplinary curriculum with pedagogical and research experiences. You’ll also conduct applied and translational research focused on exercise science, which will help you enhance and prolong the quality of human life.

Meet the faculty from this area

Explore the curriculum

Calculate program costs

  • Learn how to apply

Teaching and research assistantships

A limited number of teaching assistantships are available, and research assistantships are available depending on grant funding. Assistantships include a tuition waiver, health insurance, and stipend. Research assistantships are required of Ph.D. students.

Find research mentors

Monica Teegardin

The exercise science Ph.D. program provides ample opportunity to not only enhance one’s knowledge but also gain experience teaching and engage in research. The faculty within the program are phenomenal and always willing to help. Monica Teegardin, Ph.D. Student, Kinesiology

9% Expected growth rate of exercise physiologists from 2021-2031

$90 K Annual mean wage of doctorate exercise physiologists in the U.S., 2023

Take the next step

  • Connect with an advisor

School of Health & Human Sciences resources and social media channels

  • Faculty & Staff Intranet
  • Skip to Content
  • Skip to Main Navigation
  • Skip to Search

h index of phd student

Indiana University Indianapolis Indiana University Indianapolis IU Indianapolis

Open Search

  • About Paul H. O'Neill
  • Mission, Vision, and Values
  • Resources & Services
  • Rankings & Statistics
  • Indianapolis & Bloomington
  • Undergraduate
  • Credit Hour Awards
  • International
  • Transfer to O'Neill
  • Undergraduate Scholarships
  • Graduate Scholarships and Fellowships
  • Schedule a Visit
  • Your Undergraduate Journey
  • Criminal Justice
  • Management & Civic Leadership
  • Public Policy
  • Public Safety Management
  • Sustainability
  • Minors & Certificates
  • Accelerated Master's
  • Environmental Policy & Sustainability
  • Homeland Security & Emergency Management
  • Innovation & Social Change

Nonprofit Management

Policy Analysis

Public Management

  • Urban & Regional Governance
  • Master of Science in Criminal Justice & Public Safety
  • Certificates
  • Request Information
  • Ph.D. Minors
  • Capstone Classes
  • Honors Program
  • Executive Master of Public Affairs
  • Executive Certificate in Public Management
  • IHA Management Institute
  • Nonprofit Leadership Academy
  • Certificate in Nonprofit Executive Leadership
  • Holistic Leadership Series
  • Pivot with Courage
  • Consulting Services
  • Our Clients
  • Program Calendar
  • Job Listings
  • Internships
  • Information for Employers
  • Career Events
  • Student Organizations
  • Washington Leadership Program
  • Study Abroad
  • LEAD Program
  • Activate O'Neill
  • Criminology & Criminal Justice
  • Public Management & Governance
  • Faculty Directory
  • Faculty Publications
  • Public Policy Institute
  • Community Engagement
  • O'Neill Advisor Appointment
  • Take Classes at Another IU Campus
  • Giving to the O'Neill School
  • Get Involved
  • People Directory

Paul H. O’Neill School of Public and Environmental Affairs

  • Student Portal
  • Alumni & Giving
  • Graduate Degrees

Master of Public Affairs

Earn an mpa from the o'neill school.

The Master of Public Affairs (MPA) program at the O’Neill School at IUPUI will advance your understanding of public, private, and nonprofit organizations that serve the public interest.

Our 39 credit hour MPA emphasizes professional practice across disciplines, preparing students for responsible and informed legal, management, financial, and budgetary leadership roles in the public and nonprofit sector.

Badge from U.S. News and World Report.

Accreditation

The O’Neill MPA program at IUPUI is fully accredited by the National Association of Schools of Public Affairs and Administration (NASPAA) .

Degree requirements

The core requirements of the MPA prepare you to enter or continue work in public service, no matter your area of focus. Each student must also complete the requirements of one concentration  and two or three electives depending on the program of study.

  • MPA core (15 credit hours)
  • Concentration requirement (18 credit hours, 15 for Nonprofit Management)
  • Electives (6 credit hours, 9 for Nonprofit Management)

68% OF O'NEILL MPA STUDENTS ARE PART-TIME

Attend full time or part time

O'neill indianapolis graduate program rankings.

#39 MPA ranking out of 271 schools

#2 Nonprofit Management

#12 Environmental Policy & Management

More rankings and stats

O'Neill School MPA Concentrations

Environmental Policy and Sustainability

Homeland Security and Emergency Management

Innovation and Social Change

Urban and Regional Governance

Dual degrees

Master of public affairs-doctor of jurisprudence (mpa-jd).

Offered with the Robert H. McKinney School of Law, this program provides an understanding of the legal and managerial framework of public service, nonprofit, and quasi-governmental agencies.

Master of Public Affairs-Master of Arts in Philanthropic Studies (MPA-MA)

Offered with the IU Lilly Family School of Philanthropy, this program focuses on the history, culture, and values of philanthropy, as well as the managerial framework of public service.

Real-life, hands-on projects—not case studies

A graduate student discusses her Capstone project.

Capstone classes tackle projects that promote positive change in the community. Each class starts with a variety of local agencies that pitch their needs to O’Neill MPA students, who work on the projects in groups.

O'Neill MPA graduate employment (2022-23 graduates)

56% of graduates are employed in the government sector

21% of graduates are employed in the nonprofit sector

19% of graduates are employed in the private sector

Employment data is based on 88 program graduates. The employment sector of 4 graduates is unknown.

O'neill mpa graduation rates (fall 2017 & spring 2018 cohorts).

76% Graduated after 6 semesters

84% Graduated after 8 semesters

92% Graduated after 10 semesters

Graduation data is based on 25 students who initially enrolled.

O'neill mpa interns and alumni work for:.

  • Alzheimer’s Association of Greater Indiana
  • Anthem Blue Cross Blue Shield
  • City of Indianapolis
  • Cleveland Orchestra
  • Environmental Protection Agency
  • Finish Line Youth Foundation
  • Indiana Attorney General ’ s Office
  • Indiana Department of Commerce
  • Indiana Department of Education
  • Indiana State House
  • Indiana University Foundation
  • Indianapolis Downtown, Inc.
  • Iowa League of Cities
  • Lutheran Child and Family Services
  • The Nature Conservancy
  • Office of Medicaid Policy and Planning
  • Purdue University
  • United Way of Central Indiana
  • U.S. District Court

Executive MPA for employers

The O'Neill School offers a 39 credit hour Executive Master of Public Affairs program through the offices of Executive Education. The cohort-based course of study is designed for employers and can be delivered at IUPUI or your company's location. Offered with a management concentration, the Executive MPA prepares employees to manage public agencies and nonprofit organizations.

Crane Naval Surface Warfare Center has partnered with Executive Education to offer the EMPA to its employees for the past 20 years. Read more about the partnership on the O'Neill blog.

Social dreamers. Tangible skill sets.

  • Apply to O'Neill Indianapolis
  • Browse degree programs
  • Read the blog

Paul H. O’Neill School of Public and Environmental Affairs social media channels

  • International

live news

Trump's hush money trial

live news

Israel-Hamas war

May 1, 2024 - US campus protests

By Elizabeth Wolfe, Kathleen Magramo, Dalia Faheid, Antoinette Radford, Emma Tucker, Anna Cooban, Rachel Ramirez, Aditi Sangal, Elise Hammond, Maureen Chowdhury, Lauren Mascarenhas, Chandelis Duster and Tori B. Powell, CNN

Our live coverage of the protests at US colleges has moved here

USC reopens campus to school community after closing due to protesters unaffiliated with university

From CNN's Taylor Romine

The University of Southern California reopened its campus to the school community Wednesday night after temporarily closing because "demonstrators unaffiliated with USC" were protesting next to the campus, the school said.

The protesters were gathered at the intersection of Jefferson Boulevard and Figueroa Street, the school said in a post at around 8 p.m. It was not clear what they were protesting. 

Shortly after 9 p.m., the school said the demonstrators had left the area and the campus was reopened to "students, staff, faculty, and registered guests."

UCLA police tell people to leave encampment over loudspeaker

UCLA police over loudspeaker told those in the encampment to leave a little before 8 p.m. PT Wednesday evening.

Police are warning those in the encampment they may be "in violation of the law and subject to administrative actions."

LAPD issues city-wide "tactical alert" putting officers on notice about UCLA protest

From CNN's Josh Campbell

The Los Angeles Police Department has issued a city-wide "tactical alert" related to the unlawful assembly declared at a pro-Palestinian encampment at UCLA, a law enforcement source told CNN. 

The alert notifies all LAPD personnel that they could be called on tonight to assist with the ongoing situation on campus, if needed.

During a tactical alert, some lower-priority calls for police services may not be addressed.

Several law enforcement agencies coordinate their approach to UCLA encampment, source says

From CNN's Nick Watt

Police officers get into position as pro-Palestinian students and activists demonstrate on the campus of the University of California, Los Angeles (UCLA) on May 1.

The large law enforcement presence on UCLA's campus is comprised of several agencies to perform specific tasks to clear the encampment, according to a source familiar with law enforcement plans:

  • The Los Angeles Police Department will secure the perimeter.
  • The California Highway Patrol will enter the encampment.
  • The Los Angeles Sheriff's Department will be responsible for crowd control.

Law enforcement on site will be equipped with protective gear, including gas masks, according to the source. The UCLA hospital will also be on standby to receive anyone who may be injured, the source said.

State police deployed to University of New Hampshire and Dartmouth College took people into custody

From CNN’s Joe Sutton

Police arrest several protesters at Dartmouth College on Wednesday night.

State police were deployed to the University of New Hampshire and Dartmouth College due to “illegal activity and at the request of local law enforcement,” the New Hampshire Department of Safety told CNN.

"All individuals who were taken into custody are being processed by the University of New Hampshire Police Department and the Hanover Police Department,” said Tyler Dumont, New Hampshire Department of Safety spokesman. “The members of the New Hampshire State Police are committed to protecting the constitutional rights of Granite Staters while also ensuring those who violate the law are held accountable."

The University of New Hampshire told CNN that students supporting Palestinians had peacefully protested on campus at least seven times over the past six months.

"Despite much communication with organizers regarding the University’s expectations for conduct when exercising their free speech rights, those guidelines were ignored today. Protesters erected tents in an attempt to create an encampment on UNH property."

The university said it will protect free speech on campus but "will not allow it to be co-opted by a small group of protesters, including outside agitators.”

CNN has reached out to Dartmouth College for comment. 

Multiple people were arrested during an ongoing pro-Palestinian protest at Dartmouth College on Wednesday night, according to CNN affiliate  WMUR .

Multiple people arrested at Dartmouth College in standoff between protesters and police

From CNN’s Jillian Sykes

Police arrest several protesters at Dartmouth College on Wednesday night.

Multiple people have been arrested during an ongoing pro-Palestinian protest at Dartmouth College on Wednesday night, according to CNN affiliate  WMUR .

Video from WMUR shows police pulling protesters one-by-one from the crowd gathered on the Dartmouth Green and detaining them with zip ties.

Protesters can be heard chanting “Free Palestine” while holding banners and flags.

The crowd appears to be a mix of students and members of the community, WMUR says.

About 16 arrested following protest at University at Buffalo, school says

Approximately 16 people were arrested Wednesday night after a pro-Palestine protest at the University at Buffalo's North Campus, including students and "other individuals not affiliated with the University at Buffalo," the school said in a release.

Those people were arrested after being "advised of, and failing to comply, with an order to disperse for a violation of UB’s  Picketing and Assembling Policy  that prohibits encampments and overnight assemblies," the release reads.

"While many protesters peacefully left the area after being advised multiple times by UB Police that those remaining at the protest would be arrested if they did not disperse at dusk, unfortunately some individuals elected to ignore the requests of UB Police and were arrested."

"A few individuals" attempted to resist arrest, and two officers were assaulted, the release reads.

In an earlier  release , the university said its chapter of Students for Justice in Palestine originally organized a march at the North Campus on Wednesday. 

Around  50 people , including students and others not affiliated with the university, continued to protest into Wednesday evening, the university said.

Many left the area after warnings from university police to disperse at dusk, but others were arrested outside of Hochstetter Hall, the university said .

"While the decision to arrest individuals occurred after multiple discussions, communications and warnings to protesters, UB Police prioritized the safety and security of the university community by upholding and enforcing all applicable laws, SUNY rules and UB polices."

The university said it recognizes and respects the right to protest but emphasized that overnight assemblies and indoor and outdoor encampments are prohibited.

"The university recognizes and respects the right to protest afforded under the First Amendment," the release announcing the arrests reads. "However, those members of the university community and visitors who wish to express their viewpoints through picketing and other forms of demonstration are permitted to peacefully do so but must not violate the provisions of the  Rules for the Maintenance of Public Order of the SUNY Board of Trustees  and must adhere to UB’s  Picketing and Assembling Policy , including the prohibition of overnight assemblies, and indoor and outdoor encampments."

Five tents were previously placed on campus but were removed by protesters after they were advised by university staff and police.

Unlawful assembly declared at UCLA encampment, source says

From CNN's Josh Campbell and Nick Watt

Law enforcement has declared an unlawful assembly for a pro-Palestinian encampment at the university's quad, a source familiar with the situation tells CNN. 

Declaring a gathering unlawful is a step police typically take before ordering individuals to disperse or face arrest.

CNN witnessed more than 100 law enforcement officers from various agencies entering the campus Wednesday, including a stream of officers wearing riot helmets and carrying zip ties.

Aerial video from CNN affiliate KABC shows dozens of police vehicles and a law enforcement mobile command post gathered at the FBI's Los Angeles field office parking lot, which is approximately one mile from the UCLA encampment. 

Hundreds of people had gathered outside the encampment Wednesday evening, most appearing to be seated on the ground across from the entrance to the camp, the aerial footage shows. Inside the encampment, more than 80 tents lined the grass as people busily wove through the area.

By around 8:30 p.m., a growing line of LAPD officers had formed between the encampment and the outside group of protesters, according to a CNN crew on the scene.

This aerial view shows police vehicles and a law enforcement mobile command post gathering at the FBI's Los Angeles field office parking lot in Loas Angeles, California.

Please enable JavaScript for a better experience.

COMMENTS

  1. What is a good H-index for each academic position?

    On average and good H-index for a PhD student is between 1 and 5, a postdoc between 2 and 17, an assistant professor between 4 - 35 and a full professor typically about 30+. Our comprehensive blog delves into the nuances of the h-index, its relevance in academic promotions, and the challenges it presents. Here is a quick summary of h-indexes ...

  2. What is a good h-index? [with examples]

    What is a good h-index for a PhD student? It is very common for supervisors to expect up to three publications from PhD students. Given the lengthy process of publication and the fact that once the papers are out, they also need to be cited, having an h-index of 1 or 2 at the end of your PhD is a big achievement.

  3. What Is Good H-Index? H-Index Required For An Academic Position

    A "good" h-index can vary based on your field of study and the stage of your PhD program. Generally, for PhD students, a lower h-index is expected and completely normal. You're just beginning your journey in academic publishing. An h-index between 1 and 5 might be typical for students nearing the end of their PhD.

  4. What Is a Good H-Index Required for an Academic Position?

    H-index scores between 3 and 5 seem common for new assistant professors, scores between 8 and 12 fairly standard for promotion to the position of tenured associate professor, and scores between 15 and 20 about right for becoming a full professor. Be aware, however, that these are gross generalisations and actual figures vary enormously among ...

  5. h-index

    The h-index is a measure used to indicate the impact and productivity of a researcher based on how often his/her publications have been cited.; The physicist, Jorge E. Hirsch, provides the following definition for the h-index: A scientist has index h if h of his/her N p papers have at least h citations each, and the other (N p − h) papers have no more than h citations each.

  6. The H-Index: good or bad?

    To many professors and graduate students, the h-index is perhaps the most widely used metric in determining the influence of one's work. This single number is used to convey the influence you have had in your research career, is pivotal to career advancement, and used in part to determine the relative influence of difference academic ...

  7. Measuring academic impact: An author-level metric: the H-index

    The H-index, proposed by physicist J.E. Hirsch (hence the H) in 2005, is a way to measure the individual academic output of a researcher. ... A professor at the end of his career has published more than a PhD-student and has had more time to get cited. There are also examples of researchers changing disciplines, taking their high H-index with ...

  8. Finding an Author's H-Index

    An h-index of 20 signifies that a scientist has published 20 articles each of which has been cited at least 20 times. Sometimes the h=index is, arguably, misleading. For example, if a scholar's works have received, say, 10,000 citations he may still have a h-index of only 12 as only 12 of his papers have been cited at least 12 times.

  9. How to find your h-index on Google Scholar

    In order to check an author's h-index with Publish or Perish go to "Query > New Google Scholar Profile Query". Enter the scholar's name in the search box and click lookup. A window will open with potential matches. After selecting a scholar, the program will query Google Scholar for citation data and populate a list of papers, and present ...

  10. Measuring your research impact: H-Index

    The Web of Science uses the H-Index to quantify research output by measuring author productivity and impact. H-Index = number of papers ( h) with a citation number ≥ h. Example: a scientist with an H-Index of 37 has 37 papers cited at least 37 times. Advantages of the H-Index: Measures quantity and impact by a single value.

  11. Chapter 9: Selective Topics for PhD Candidates Understanding The h-Index

    The h-index, also known as the Hirsch index or Hirsch number, is a metric used to measure the output and impact of the researcher on the scientific field. Discover the world's research 25+ million ...

  12. Why I love the H-index

    So let's get this out into the open now, my H-index is 44 (I have 44 papers with at least 44 citations) and, yes, I'm proud of it! But my love of the H-index stems from a much deeper obsession with citations. As an impressionable young graduate student, I saw my PhD supervisor regularly check his citations.

  13. Should I put my h-index on my CV?

    The first one is the h-index will change rapidly with time, particularly for new graduated PhD students with only few years of publication history. The second one is that the h-index provides only a little information, the only possible values are likely 3,4 and 5 which can be increased with some luck.

  14. Do researchers know what the h-index is? And how do they ...

    The h-index is a widely used scientometric indicator on the researcher level working with a simple combination of publication and citation counts. In this article, we pursue two goals, namely the collection of empirical data about researchers' personal estimations of the importance of the h-index for themselves as well as for their academic disciplines, and on the researchers' concrete ...

  15. What is a good H-index?

    3. 9. >. 1. In this case, the researcher scored an H-index of 6, since he has 6 publications that have been cited at least 6 times. The remaining articles, or those that have not yet reached 6 citations, are left aside. A good H-index score depends not only on a prolific output but also on a large number of citations by other authors.

  16. H-Index

    The h-index is a measure of publishing impact, where an author's h-index is represented by the number of papers (h) with a citation number ≥ h. For example, a scientist with an h-index of 14 has published numerous papers, 14 of which have been cited at least 14 times. Image: Screenshot of some metrics listed in an author profile in Michigan ...

  17. Citation impact

    The h-index is a measure of the total, citation-based impact of a researcher.It combines scientific production (number of publications) and impact (number of citations). The h-index is the largest number h, such that the author has at least h publications that each have been cited h times.; When exploring the literature of your research field, the h-index may give you an idea of the impact of ...

  18. How to Successfully Boost Your H-Index

    The number is based on the papers a researcher publishes and the citations those papers get. Publishing a lot of highly cited articles will increase your h-index. On the other hand, getting a lot of citations on only one or two papers will not give you a high h-index. For example, if you have an h-index of 7, it means that you have published 7 ...

  19. Comparison of scientometric achievements at PhD and scientific output

    We have computed two different H-index values: the H-index at PhD includes all publications up to the year of the PhD award. The H-index ten years after PhD includes only publications published after the PhD award. The aim of this differentiation was to exclude the direct effects of publications before the PhD on subsequent H-index values (Fig 1).

  20. What is a good h-index?

    One rule that is widely accepted, however, is that an h-index score should at least be equal to the number of years a scholar has put into his or her work. This rule was prescribed by Hirsch who recommended an h-index of at least 20 after working for the same number of years. One notable observation from Hirsch's paper is that accomplished ...

  21. The use of the h-index to evaluate and rank academic departments

    1. Introduction. The h-index, proposed by Hirsch [1], has had a deep impact on the quantification of the productivity of researchers.It is defined as [2]: A scientist has index h if h of his/her N p papers have at least h citations each, and the other (N p-h) papers have no more than h citations each.Since its inception in 2005, it has stimulated a lively debate over its relevance.

  22. r/PhD on Reddit: Can someone explain h-index vs. impact factor to me in

    As others have stated, the h-index is a metric referring to the number of papers (N) cited at least N times, usually applied to authors. The impact factor is more complicated and refers to how often a journal's articles are cited over a 2-year period, but in general, the higher the number, the more "prestigious" the journal. However, this is usually weighted towards ...

  29. Chapter 9: Selective Topics for PhD Candidates: Understanding The h-index

    The h-index is a metric used to measure the output and impact of a researcher/scholar; it is also known as the Hirsch index or Hirsch number.
