When submitting a research paper to a journal, it’s necessary to make sure that the journal is a good fit for your paper. We’ve already discussed the factors to consider while making this decision and how to choose the right journal, but now we’ll focus on one specific criterion: journal ranking and metrics.
The open-access publishing approach has resulted in a plethora of journals, each focused on a certain discipline or research area. However, just because there are so many alternatives doesn’t mean they’re all of high quality.
What are Journal Metrics?
Journal metrics are used to compare, rank, and assess research and scholarly publications. They’re also known as journal rankings, journal reputation, or journal impact, and they give scholars and researchers a common basis for comparing scholarly periodicals.
The Journal Impact Factor, developed in the 1950s and available through Thomson Reuters’ Journal Citation Reports, is the original citation impact metric. CiteScore, Eigenfactor, Google Scholar Metrics, SCImago Journal & Country Rank (SJR), and Source Normalized Impact per Paper (SNIP) are some of the more recent free journal metrics that have been developed.
To determine a journal’s importance to the research community, each journal ranking metric has its own formula. Most start from a count of how often the journal’s articles have been cited in other works. Since each metric uses its own formula and methodology, the results will differ.
For example, an Eigenfactor score considers the journal’s size, giving larger journals more weight than smaller journals, but other measures do not. Comparing the results of multiple metrics can give you a better idea of a journal’s true impact.
So, What Factors Can a Researcher Use to Determine the Quality and Influence of Academic Journals? Let’s Have a Look!
1. Impact Factor
This is one of the journal metrics that is based on the number of citations for each article in the journal. Essentially, it measures the number of the journal’s publications that have been cited in future research.
The impact factor is usually calculated over a two-year window: the number of citations received in year X by articles published in the previous two years, divided by the number of articles published in those two years.
For example, the impact factor of a journal in 2019 is calculated using articles published in 2017 and 2018. An impact factor report issued in 2020 calculates the impact factor of a journal in 2019.
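The two-year calculation described above can be sketched in a few lines. The journal figures below are invented purely to illustrate the arithmetic:

```python
def impact_factor(citations_in_year, articles_prev_two_years):
    """Two-year impact factor: citations received in the report year
    by articles from the previous two years, divided by the number
    of articles published in those two years."""
    return citations_in_year / articles_prev_two_years

# Hypothetical journal: 80 articles published in 2017 and 120 in 2018,
# cited a combined 500 times during 2019.
print(impact_factor(500, 80 + 120))  # 2.5
```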
While it is the most prevalent method of determining a journal’s influence, it is not without its limitations. For example, as it is only an average calculation, a journal’s impact factor might be high even if only a few articles have been widely cited. Moreover, impact factors are not standardized. This makes comparing rankings across disciplines difficult.
2. Eigenfactor Score

Another metric that aims to rank the importance of a journal is the Eigenfactor score. It evaluates a journal’s influence based on its position in networks of scholarly citation, rather than just counting citations.
To put it another way, the score depends on a journal’s work being cited in other respectable journals: the more a journal is cited by well-regarded titles, the higher its Eigenfactor value. An algorithm ranks journals according to these criteria.
This calculation is done over a five-year period, and citations within a journal are not taken into account.
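The network idea behind this kind of ranking can be illustrated with a simplified PageRank-style power iteration. The citation matrix below is invented, and real Eigenfactor scores use five years of Journal Citation Reports data and a more elaborate algorithm; this is only a sketch of the principle, including the exclusion of self-citations:

```python
def network_rank(cites, iterations=100):
    """Rank journals by repeatedly redistributing influence along
    citation links. cites[i][j] = citations from journal i to journal j;
    the diagonal (self-citation) is ignored, as in Eigenfactor."""
    n = len(cites)
    scores = [1.0 / n] * n
    for _ in range(iterations):
        new = [0.0] * n
        for i in range(n):
            # Total outgoing citations from journal i, excluding itself.
            out = sum(cites[i][j] for j in range(n) if j != i)
            if out == 0:
                continue
            for j in range(n):
                if j != i:
                    new[j] += scores[i] * cites[i][j] / out
        scores = new
    return scores

# Three hypothetical journals: journal 2 is cited heavily by both
# others, so it ends up with the highest score.
cites = [[0, 1, 4],
         [1, 0, 3],
         [2, 2, 0]]
print(network_rank(cites))
```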
3. SCImago Journal Rank (SJR)
This system of journal ranking takes into consideration both of the above-mentioned criteria: it analyses both the number and origin of citations credited to a journal. Every citation is assigned an SJR value depending on the number of times it has been cited and the sources it has been cited in. As a result, the higher the ranking, the more prestigious the sources are.
The SCImago calculation is, roughly: the number of citations a journal receives in year X divided by the number of articles it published in the previous three years.
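The numbers below are invented to illustrate the basic shape of that ratio; the real SJR additionally weights each citation by the prestige of its source:

```python
def sjr_ratio(weighted_citations, articles_prev_three_years):
    """Citations received in the report year (prestige-weighted in the
    real metric) divided by articles from the previous three years."""
    return weighted_citations / articles_prev_three_years

# Hypothetical journal: 450 weighted citations in 2019 to the
# 300 articles it published during 2016-2018.
print(sjr_ratio(450, 300))  # 1.5
```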
It’s similar to the Eigenfactor in that both use network theory to arrive at their calculations.
4. Source Normalized Impact per Paper (SNIP)
Using Scopus data, SNIP compares the number of citations received by articles in a journal with the number of citations expected for its topic field. The SNIP report is released twice a year and covers a three-year period.
The amount of citations per publication in a journal is divided by the field’s citation potential.
A field’s citation potential reflects how frequently papers in that field typically cite other work. This means that if an article receives a citation in a subject where citations are few, that citation is weighted more heavily.
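The normalization described above can be sketched with invented figures; the real SNIP derives citation potential from Scopus reference data, which this toy example does not attempt:

```python
def snip(citations_per_paper, field_citation_potential):
    """A journal's citations per paper divided by its field's citation
    potential, so fields with sparse citation habits are not penalized."""
    return citations_per_paper / field_citation_potential

# A hypothetical mathematics journal earning 2 citations per paper in a
# field where 1 is typical outscores a biomedical journal earning 8 in
# a field where 10 is typical, despite fewer raw citations.
print(snip(2, 1))    # 2.0 -> above average for its field
print(snip(8, 10))   # 0.8 -> below average for its field
```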
SNIP normalizes its sources to make cross-disciplinary comparisons possible. In practice, this means that a citation from a publication with a lengthy list of references is worth less.
Citations from publications classified as “non-citing sources” are not considered. Trade journals along with a few arts and humanities titles are among them.
Because SNIP is based on the Scopus database, citations from sources outside Scopus are disregarded.
5. h-index Number
Unlike the other methods of ranking, the h-index is an author-level metric. It essentially serves as an indicator of how active a researcher is in their field.
A researcher’s h-index is the largest number h such that h of their publications have each been cited at least h times. For example, a researcher’s h-index is 24 if 24 of their articles have each been cited at least 24 times, and no larger set of their papers meets that threshold.
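The definition above translates directly into code. The citation counts here are invented:

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least
    h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times: four papers have at
# least 4 citations each, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```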
6. Altmetric Score

Altmetric scores are usually calculated by companies; they cannot practically be computed by hand.
The sources that feed an altmetrics calculation vary depending on the company and the information it uses. However, a high altmetric score generally implies that an item has received a lot of attention, including what the company judges to be “quality” attention (e.g. a news story might be weighted more heavily than a Twitter mention).
Keep in mind that just because something gets a lot of attention doesn’t mean it’s important or even good. That’s why combining altmetrics with the impact factor can be beneficial.
However, When Considering Any Citation Metric, Keep in Mind the Following:
- Citations aren’t always a good indicator of quality. A citation simply indicates that the author of an article or book has chosen to reference another scholarly work in their work. However, you’ll need to know the context of that citation to determine whether or not it’s positive. An author may cite an article to reference theories or findings that they feel are incorrect or obsolete.
- Discipline-specific citation patterns exist. It is common to cite a significant number of relevant articles in some topic areas, whereas a short list of references is more common in others. You should also be aware that articles in some disciplines, such as the arts and humanities, are often referenced for far longer periods of time than articles in disciplines like science and medicine, which focus on the most recent research.
- Citations differ depending on the type of article. Review articles, for example, which provide a wide overview of a research topic, are frequently cited. A research article showing null results, on the other hand, may be referenced rarely, despite being an essential addition to the scholarly record.
- Citations will be influenced by the subject matter. An article on a widely discussed topic or one that is especially topical is likely to earn a large number of citations. A highly specialized article in a small field, on the other hand, may receive very few citations, regardless of its scholarly quality.
- The readership of a journal may have an impact on citation levels. Though their content may have a broad influence, journals aimed primarily at practitioners, policymakers, or members of the public are considerably less likely to gain citations in other scholarly publications.
We hope that this introduction to journal metrics helps you navigate the complex world of academic publishing.