Abstract

Recently, several studies have adopted bibliometric methods to assess the development of the field of public administration (PA). However, their analyses covered only a small set of journals, and few investigated the evolution of the entire field. To make progress, this paper analyzes a 19-year timespan, 53 journals, and more than 20,000 items bibliometrically. Both activity and quality indicators of research output are applied, with 3-year and 5-year citation windows, at three aggregation levels (journal, country, and institution), using publications in PA indexed in the Social Sciences Citation Index (SSCI). The “resident” journal is proposed as a new concept to explore differences between traditional and emerging research forces. The results suggest that resident journals maintain a large advantage over other journals in terms of higher-quality journal indicators and citation impact indicators. Moreover, both international and national collaboration show a growth tendency, especially the international type. The majority of active institutions are from the US and the UK, indicating the dominant position of these two countries. This study provides more comprehensive comparisons through large-scale data and established methods to explore the development of research in the PA field.

1. Introduction

As a practice-oriented field, public administration is comprehensive and interdisciplinary, which means that the PA field usually absorbs knowledge, concepts, theories, and methods from other fields. To better describe and analyze the research output of this field, several studies have introduced bibliometric methods to assess the development of PA.

One common carrier of the progress of a discipline or field is the journal, which reflects the cohesion of authors and topics around a novel area of inquiry [1]. By exploring the evolution of important journals, we can begin to weave the history of a field. For instance, bibliometric methods have been employed to trace the path of the PA field via an analysis of articles published in Public Administration Review (PAR) from 1940 to 2013 [2]. PAR is well suited for this analysis given its historical position as one of the oldest of the 53 PA journals indexed in the Web of Science (WoS).

In library and information science, three terms are most common: bibliometrics, scientometrics, and informetrics [1]. Although the three have different research objects and purposes, they share a common origin and are based on common principles, methods, and tools. Bibliometrics and scientometrics are the most similar, and their research fields overlap to a considerable extent, but there are also differences. Bibliometrics is a set of methods for studying and measuring bibliographic information, while scientometrics focuses on the quantitative aspects of scientific research. Since scientific and technological results are usually recorded in the literature, such literature becomes the intersection of bibliometric and scientometric research. However, the object of bibliometrics is always scientific and academic literature, while scientometrics also attends to the activities of researchers, research management, the role of science and technology in the national economy, and technology policy.

This study introduces bibliometric and scientometric methods to PA. The number of journals is expanded to the 53 journals of the entire PA field, and the period is extended to 19 years. In addition, the 19 annual lists of PA journals in the Journal Citation Reports (JCR) are used as the criterion to select journals. Although these lists change yearly, some journals always stay on them. To differentiate them from the others, these stable journals are termed “resident” and the others “nonresident.” This study also provides insight into the impact of publications in three datasets (i.e., resident, nonresident, and total journals).

To provide a comprehensive and accurate definition of influence, this study estimates publications bibliometrically. Both the activity and the quality of research are used. The former is measured by the productivity of publications, and the latter by Journal Impact Factor (JIF) quartile indicators and citation impact indicators. More importantly, the extended scope makes it possible to discuss the contributions of countries and institutions in PA. Specifically, this study focuses on three aggregation levels (journal, country, and institution) of publications in PA to answer the following questions:
(i) Can the resident journals dataset represent the entire field of PA?
(ii) What are the active countries in PA? Does the proportion of coauthorship remain low, as previous studies mentioned?
(iii) What are the active institutions in PA? Are they consistent with the active institutions mentioned in previous studies?

As a diachronic analysis, this study focuses on several bibliometric features, including paper productivity, active countries and institutions, authorship patterns, and citation patterns.

The subsequent sections of this paper are organized as follows. Section 2 gives a review of relevant research in PA and illustrates some of the existing classic studies in bibliometrics. Section 3 presents the data and methods employed in the study. Section 4 shows the results, the fifth section is about the discussion, and the final section concludes the research.

2. Literature Review

In the PA field, there is a traditional practice of evaluating research results by determining the impact of literature subjectively. However, scholars represented by Bozeman criticized this practice [3], arguing that opinions obtained from the subjective evaluations of a small number of scholars are prone to bias and that the results may vary widely among respondents. The debate reflects the dilemma of traditional methods of evaluating research results in PA. In this regard, Bozeman proposed an improved approach: introducing bibliometric methods into the evaluation of research results to describe their characteristics objectively.

In terms of research objectives, bibliometrics mainly focuses on the regularity of documents in the process of utilization and communication, while scientometrics focuses more on analyzing the regularity of the amount of scientific information generation, dissemination, and utilization, to better understand the mechanism of scientific research.

Inspired by the call of Bozeman [3], some studies began to apply bibliometric methods to PA. For instance, Powell [4] analyzed five leading journals in the field of social policy and reported that more than half of the articles were written by authors from the UK, with most of the rest coming from other parts of Europe. Ni et al. [2] analyzed 3,934 articles published in PAR from 1940 to 2013, reviewing more than 70 years of the journal's research progress through citation analysis, collaboration network analysis, keyword analysis, and other techniques. It was a substantial application of bibliometric methods in PA. However, their work covered only one journal, which could hardly provide a comprehensive evaluation of PA.

Currently, the application of bibliometric methods to PA mainly remains at the level of one or several journals; few studies have analyzed the entire field. This study reviews six such studies published from 2010 to 2017, as shown in Table 1.

Only one study, by Corley and others [5], surveyed the entire PA field, and its focus was merely on output and authorship analysis. In general, bibliometric research focuses on several analyses: productivity, authorship, citation, and social networks. However, only the study by Ni et al. [2] involved all of these analyses. It is therefore essential to identify bibliometric and scientometric methods that might be applied in PA. Mingers and Leydesdorff [1] presented a comprehensive review of the theory and practice of scientometrics. For the analysis of collaboration and citation impact, some researchers suggested that research units should be distinguished at different aggregation levels [10–12]. These units range from the macro level to the micro level, including countries, institutions, research groups, and individual researchers.

Publications coauthored by more than one research unit can be analyzed to examine the research collaboration between those units. Based on the bibliometric data of about 4.5 million articles in 56 subject categories in SSCI, Henriksen [13] found that the average number of authors, as well as the proportion of coauthored and internationally coauthored publications, increased in most categories. However, the extent of this increase varies widely across disciplines. Coauthorship has generally increased in categories with a high proportion of international coauthorship, but increased international coauthorship is not the only factor driving the growth of author collaboration.

Citation analysis based on citations received by scientific publications typically uses citation impact indicators to assess the scientific influence. Waltman [12] gave an in-depth review of the literature on citation impact indicators. Citation impact indicators are based on analysis of citations obtained from all publications published by a research unit. Well-known examples of impact indicators are JIF and h-index [14, 15].

Citation impact indicators can be classified into size-dependent and size-independent indicators. Size-dependent indicators are designed to provide an overall performance measure and never decrease when new publications are added. Size-independent indicators, on the other hand, are intended to provide an average performance measure per publication and are often used to compare units of different sizes, such as a small research unit with a large one [12]. Mean-based indicators are one type of size-independent indicator, including impact factors, the mean normalized citation score (MNCS), and the relative citation rate (RCR), but all of them have been criticized in the literature [16]. The citation distribution of a publication set tends to be highly skewed, so the average number of citations may be heavily influenced by one or a few highly cited publications [17, 18].

Percentile-based methods (quartiles, percentiles, percentile rankings, or percentile ranking classes) have been proposed as a nonparametric alternative to parametric mean-based approaches [19]. Percentiles are based on an ordered set of citation counts in a reference set: the proportion of publications with citation counts at or below that of a given publication serves as an indicator of its relative citation impact. Rather than normalizing by average citation counts, publications similar to a given one are grouped into a reference set, and its citation impact is calculated from its rank in the citation distribution of that set [20].

The Leiden Ranking is a well-known institution-level university ranking based on bibliometrics, produced by the Centre for Science and Technology Studies (CWTS) of Leiden University. Nearly 1,000 major universities worldwide are covered by the 2018 edition. Given its mature methodology, this paper selects some methods and indicators of the Leiden Ranking (e.g., PPtop 1% and PPtop 10%) for research evaluation at the country and institution levels [18].

3. Methods

3.1. Data Collection

The study is based on data from the Web of Science (WoS) database from Clarivate Analytics. Two document types (article and review) are included, as they are regarded as citable items for citation analysis. Publications over a 19-year period indexed in the SSCI of WoS are used for analysis. The timespan is set to 1998–2016 for two reasons. Firstly, the authors' institution has subscribed to SSCI only from 1998 onward. Secondly, a 19-year period can serve to trace and analyze the evolution of the field.

The JCR in WoS is used to determine the field of PA and the scope of journals. According to the journal lists of JCR 1998–2016, the set of journals in PA has changed almost every year. With journals added and removed over the period, the number of journals increases from 24 in 1998 to 47 in 2016, and 53 journals are involved in total. All 53 English-language journals were included in this study to cover the entire PA field.

In JCR 1998–2016, the list of journals in PA varies over time, and some journals on the list also experienced changes in this period (e.g., the full name of the journal Governance changed in 2008, and the ISSN of the Journal of Homeland Security and Emergency Management changed in 2012). To identify those journals unambiguously, 54 ISSNs were used to retrieve and screen data in SSCI. The process is shown in Figure 1.

Following the steps above, 23,243 items (22,690 articles and 553 reviews) were downloaded on Jan 13th, 2018. The criterion for whether a publication belongs to PA is that its journal must have been on the PA journal list in the publication year. For example, an article published in Policy Studies in 2010 was not included in the dataset, because the journal did not enter the PA journal list until 2011. Under this criterion, the filtered data contains 20,980 items, comprising 20,480 articles and 500 reviews. For statistical analysis, the statistical software SPSS (Statistical Product and Service Solutions) v23.0 was used.
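The year-by-year inclusion rule can be sketched in a few lines of Python; the journal names and list years below are illustrative placeholders, not the actual JCR lists:

```python
# Sketch of the inclusion rule: a publication counts as PA only if its
# journal was on the JCR PA list in its publication year.
# The pa_lists mapping is illustrative, not the actual JCR data.
pa_lists = {
    2010: {"Public Administration Review", "Governance"},
    2011: {"Public Administration Review", "Governance", "Policy Studies"},
}

def in_pa_field(journal: str, year: int) -> bool:
    """Return True if the journal was on the PA list in that year."""
    return journal in pa_lists.get(year, set())

records = [
    {"journal": "Policy Studies", "year": 2010},  # excluded: listed only from 2011
    {"journal": "Policy Studies", "year": 2011},  # included
]
filtered = [r for r in records if in_pa_field(r["journal"], r["year"])]
```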

3.2. Journal Classification

In this paper, two methods were used to classify journals. The first is based on the appearance of journals in the annual PA lists. Although the lists have changed almost every year, it was found that 24 journals stayed on the lists throughout the past decade (2007–2016). To distinguish them from the others, they were labeled “resident” journals, and the other 29 journals “nonresident” journals. All publications were grouped into three datasets as follows:
(i) Dataset I: data from all 53 journals
(ii) Dataset II: data from the 24 resident journals
(iii) Dataset III: data from the 29 nonresident journals
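As a minimal sketch (with illustrative journal names, not the actual 24-journal resident list), the three datasets form a simple partition:

```python
# Partition publications into the three datasets described above.
# resident_journals is illustrative; the paper identifies 24 such journals.
resident_journals = {"Public Administration Review", "Governance"}

pubs = [
    {"journal": "Public Administration Review"},
    {"journal": "Policy Studies"},
]

dataset_total = pubs                                                # Dataset I
dataset_resident = [p for p in pubs
                    if p["journal"] in resident_journals]           # Dataset II
dataset_nonresident = [p for p in pubs
                       if p["journal"] not in resident_journals]    # Dataset III
```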

The subsequent analysis lists the contributions of countries and institutions in the three datasets to reflect their performance. The 53 journals are listed in Table 2, with resident journals marked by the symbol †.

The second classification is based on the JIF values of journals in JCR 1998–2016. Journals were assigned to quartiles Q1–Q4 based on their JIF rankings in PA. Following Zhou and Lv [21], journals in the higher quartiles (Q1 and Q2) were labeled higher-quality journals and those in Q3 and Q4 lower-quality journals where the distinction was needed. Publications in higher-quality journals can be considered of higher quality, although this is not always true: not every publication in a higher-quality journal outperforms every publication in a lower-quality journal. Therefore, we assumed that a country with more publications in higher-quality journals has more competitive capacity. The JIF quartile indicator was used to reflect the influence of major contributors.
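One possible way to derive the quartile and quality label from a JIF rank is sketched below. The boundary convention (ceiling of 4·rank/n) is an assumption for illustration; JCR's exact tie and boundary handling may differ:

```python
import math

def jif_quartile(rank: int, n_journals: int) -> int:
    """Quartile (1-4) of a journal given its JIF rank (1 = highest JIF).

    Uses a simple ceiling convention; JCR's exact rule may differ.
    """
    return math.ceil(4 * rank / n_journals)

def quality_label(quartile: int) -> str:
    # Following Zhou and Lv: Q1/Q2 = higher quality, Q3/Q4 = lower quality.
    return "higher" if quartile <= 2 else "lower"
```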

3.3. Research Unit

In practice, the record in the WoS often contains an author information field and an affiliation field. The names of authors and the addresses corresponding to authors are recorded in the affiliation field. An address often contains information about a country, an institution, and even a postcode. To make full use of the information, the research unit in this study mainly involves two aggregation levels: country and institution. The active countries and institutions were analyzed as the active players in PA.

The affiliation of authors is based on the address field of the WoS data. Two countries need further clarification. The first is the UK: addresses in England, Scotland, Wales, and Northern Ireland were classified as the UK. The second is China: an author from mainland China, Hong Kong, or Macao was regarded as being from the People's Republic of China (China for short). This grouping is unproblematic because the study starts in 1998, after Hong Kong's return to China in 1997, and the earliest publication from Macao appeared in 2000, after Macao's return in 1999.
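The normalization rules above can be sketched as a lookup table; the raw region strings are assumptions about how WoS records these areas, not verified field values:

```python
# Map raw region strings from the WoS address field to a study-level
# country label. The keys are illustrative assumptions about WoS naming.
COUNTRY_MAP = {
    "England": "UK", "Scotland": "UK", "Wales": "UK", "North Ireland": "UK",
    "Peoples R China": "China", "Hong Kong": "China", "Macao": "China",
}

def normalize_country(raw: str) -> str:
    """Return the normalized country, leaving unmapped names unchanged."""
    return COUNTRY_MAP.get(raw, raw)
```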

In the identification and correction process for institutions, different variants of an organization's name (abbreviations, misspellings, and historical name changes) were identified and merged into the same institution. Additionally, as mentioned by Ni et al. [2], there is confusion between system-level and campus-level organizations. To facilitate the distinction, we corrected organizations with reference to some popular university rankings. Records with system-level organizations were identified and split into campus-level institutions.

There are two typical instances. (i) The University of London is a system-level institution whose records are assigned to campus-level institutions, such as the London School of Economics and Political Science (LSE), University College London (UCL), and Imperial College London. (ii) The records affiliated to Indiana University System are split as campus-level institutions, including Indiana University Bloomington and Indiana University Purdue University Indianapolis (IUPUI).

Following earlier research [10, 22], publications can be divided into single-authored and coauthored, and, along a second dimension, into national and international. Combining the two dimensions yields four types of publications; together with publications lacking affiliation information, the five types are shown in Table 3.
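A sketch of the five-way classification, assuming each record exposes its author count and the number of distinct countries in its address field (zero when the affiliation is missing):

```python
def authorship_type(authors: int, countries: int) -> str:
    """Classify a publication along the two dimensions described above.

    `countries` is the number of distinct countries in the address field;
    0 means the affiliation information is missing.
    """
    if countries == 0:
        return "no affiliation"
    scope = "international" if countries > 1 else "national"
    style = "coauthored" if authors > 1 else "single-authored"
    return f"{scope} {style}"
```

Note that a single author affiliated with institutions in more than one country yields the "international single-authored" type, matching the small residual category discussed in the results.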

Although WoS records in recent years are largely complete, early records sometimes lack author or affiliation information. Therefore, two cases of problematic data were marked and processed as follows.

Case 1. There were three items that contained an empty author information field. After analyzing each item, it was found that all of them were written by Perri Six, a noted social scientist in the UK. After manual identification, the three publications were included in this study.

Case 2. Some items contained an empty affiliation field, so the counts for the corresponding countries would be zero. These items were excluded from the analyses of active countries and institutions but included in the journal-level analyses. There were 949 such items, accounting for 4.08% of the total.
The missing affiliation information has objective and subjective causes. The objective causes are that the authors forgot to add affiliation information at submission or that the database did not record it in earlier years. The subjective causes are more complicated, including the authors' judgment of an institution's contribution to the publication and considerations of personal privacy. We did not add affiliations manually, even when an author's affiliation at a certain time is known, since omitting it may be the author's deliberate choice, and the institutional contribution to a publication should be judged by its authors.

3.4. Citation Analysis

Citation analysis is a common method of research evaluation, using some indicators such as total citations, citations per publication, and the proportion of highly cited publications. This study focused on the citation impact of resident journals and the total journals dataset to evaluate the performance of some countries as a way to reflect influence.

The citation window is the time window for counting the citations of a publication. Citation patterns differ across fields, and a 3-year window is a good trade-off among them [23]. Bornmann et al. [19] suggested that a reliable approximation of long-term citation impact (e.g., correlation coefficient r ≥ 0.8) requires a citation window of at least 5 years, and Waltman [12] notes that a common approach is to use a 5-year window. Given that PA is a social science discipline, both 3-year and 5-year windows were adopted to better reflect the citation patterns in PA.

Citation data in this study comes from WoS and covers the period 1998–2017. Given the definition of the citation window, only publications from 1998–2015 can be analyzed with the 3-year window, and only those from 1998–2013 with the 5-year window.
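The usable publication years for each window can be derived mechanically; the sketch below assumes the window includes the publication year itself, which matches the 1998–2015 and 1998–2013 ranges stated above:

```python
# With citation data through 2017, a publication year is usable only if a
# full `window`-year citation window (including the publication year) fits.
LAST_CITATION_YEAR = 2017
FIRST_PUB_YEAR = 1998

def usable_years(window: int) -> range:
    """Publication years with a complete `window`-year citation window."""
    return range(FIRST_PUB_YEAR, LAST_CITATION_YEAR - window + 2)
```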

This work used both mean-based and percentile-based indicators. Citations per publication (CPP) is a frequently used mean-based indicator that reflects the average number of citations received by a set of publications. A set of publications can be defined at different aggregation levels and periods, and the citation impact of a research unit can be illustrated by analyzing its corresponding set.
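CPP itself is a plain arithmetic mean, for example:

```python
def cpp(citation_counts):
    """Citations per publication for a set of publications."""
    return sum(citation_counts) / len(citation_counts) if citation_counts else 0.0

# A unit whose publications were cited 0, 2, and 10 times has CPP = 4.0,
# illustrating how one well-cited item can dominate the mean.
```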

The proportion of highly cited publications is a form of percentile-based indicator. Reference sets used in the percentile-based approach are defined as publications of the same field, publication year, and document type. A percentile-based indicator is usually defined and named by a specific threshold, such as the top 1%, 5%, or 10%. Following the percentile calculation of the Leiden Ranking [24], all three thresholds are used, labeled PPtop x%.
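A simplified sketch of a PPtop x% computation is shown below. The real Leiden Ranking assigns tied publications fractionally; this version uses a plain threshold and is only an approximation of that method:

```python
def pp_top(citations, reference_set, percent):
    """Share of `citations` at or above the top-`percent` citation threshold
    of the reference set (same field, year, and document type).

    Simplified: ties at the threshold are counted whole, not fractionally.
    """
    ranked = sorted(reference_set, reverse=True)
    k = max(1, round(len(ranked) * percent / 100))
    threshold = ranked[k - 1]        # citation count of the k-th ranked item
    top = sum(1 for c in citations if c >= threshold)
    return top / len(citations)
```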

4. Results

To provide a clear picture to answer the questions mentioned in the first section, this part is organized into five subsections. The first three subsections are about productivity, and the following sections focus on research quality.

4.1. Overview on Productivity

Figure 2 illustrates the annual changes in journals and publications in PA. In general, the number of journals rises from 24 in 1998 to 47 in 2016, while the number of publications grows from 734 to 1,697. More specifically, both numbers increased steadily, with slight fluctuation, during 1998–2006, rose rapidly from 2007 to 2011, and have returned to a steady state since 2012.

The datasets of resident and nonresident journals are also shown in Figure 2. In the last decade, although the number of resident journals did not change, their number of publications has shown decelerating growth, suggesting that some resident journals are expanding their publication volume cautiously. By contrast, the number of publications in nonresident journals has grown substantially with the increasing number of journals, indicating that articles in nonresident journals are the main driving force behind the growth in publications.

Regarding the changing number of publications (Figure 2), the 19 years can be divided into three stages: 1998–2006, 2007–2011, and 2012–2016. During the first and third stages, the numbers of journals and publications are stable, while the second stage (2007–2011) experienced dramatic growth.

According to the affiliation information in the address field, the numbers of authors and countries were counted. Figure 3 shows the annual evolution of the average number of authors and countries per publication. The average numbers of authors and countries increased from 1.53 and 0.95 in 1998 to 2.19 and 1.28 in 2016, respectively. This implies increasing collaboration in PA, driven by the increasing use of statistics and register/survey data [13]. In 1998–2002, the average number of countries was less than 1, owing to missing address information in some of the data. The subsequent analysis shows that the share of missing address information decreases year by year.

Generally speaking, although the number of publications (the denominator) increases, all the quotients show a growing trend, with the average number of authors growing faster than that of countries. The actual growth rate of the average number of countries would be even slower if the impact of missing data were removed. Over the 19 years, the resident journals dataset changed in line with the total journals dataset. By contrast, the performance of nonresident journals fluctuates, but its trends stabilized after 2012 and are close to the overall performance. The earlier volatility was mainly due to the small size of the nonresident journals dataset, which is vulnerable to extreme values.

4.2. Active Players

The top 20 countries by publications over the 19 years are shown in Table 4. The US and the UK are clearly the major contributors, together accounting for 58% of the published papers. The percentages of publications in resident journals from the countries ranked 2–5 are greater than the world average (78.3%), indicating that they focus more on publishing in resident journals. By contrast, the US percentages in the three stages are lower than the world averages (86.1 < 91.4, 79.8 < 81.9, 63.3 < 64.5), suggesting that American authors have contributed relatively more to nonresident journals. From the perspective of the three stages, the shares of publications in resident journals are declining in almost all countries, implying that the gap between resident and nonresident journals is narrowing. Still, some countries show a preference for resident journals, with percentages above the world average in all three stages: the countries ranked 2–5, as well as Norway, Sweden, France, and New Zealand.

The top 20 institutions from 1998 to 2016 are given in Table 5. All 20 are universities, indicating that the role of universities is far stronger than that of other types of institutions.

The distribution of the top 20 institutions among countries is highly skewed. The US and the UK contribute 10 and 5 institutions, respectively; there are two from the Netherlands, and one each from Australia, Canada, and China.

According to the interval mean of the percentage of items from resident journals, 15 of the 20 institutions have mainly published in resident journals. Unlike previous studies based on data from the single journal PAR [2], we found that the most active institution over the past 19 years is Cardiff University in the UK. The aggregation effect of a university system on publication productivity and citations is not reflected in our research, because system-level institutions were split into campus-level ones. Even so, Indiana University Bloomington, an important part of its system, still ranks 9th; a similar situation occurs with the University of Wisconsin-Madison. The institutions appearing on both lists are all from the US: the University of Georgia, Harvard University, Texas A&M University, Syracuse University, and American University.

4.3. Authorship Style Analysis

The proportions of the different publication types (Table 3) in the three datasets are presented in Figure 4. Overall, both internationally and nationally coauthored publications have increased, from 3.81% and 34.06% in 1998 to 21.80% and 46.43% in 2016, respectively. In contrast, the proportion of national single authorship decreases from 50.82% in 1998 to 30.34% in 2016. Collaboration in PA is thus growing, with international collaboration increasing most markedly. The same phenomenon occurs in several social science fields that increasingly use quantitative methods, experiments, and division of labor [13]. The total proportion of coauthorship reached 68.23% in 2016, still lower than the 2013 figures for Business and Economics reported by Henriksen [13]. In addition, the share of items with missing address information fell from 10.35% in 1998 to 0.29% in 2016, implying that the quality of WoS records has improved. It is worth noting that the share of international single authorship fluctuates but stabilizes at around 1%; the main reason is that these authors are affiliated with institutions in more than one country [2].

Figure 4 illustrates that the changes in resident journals are similar to those in all journals, while the nonresident journals differ markedly. To verify this judgment, the paired-samples t-test and the Wilcoxon signed-rank test were used to examine three indicators: the proportions of internationally and nationally coauthored publications and of national single authorship. First, the annual values of the international coauthorship indicator for the resident and total journals datasets are paired; the differences are then computed, and the t-test is performed in SPSS. The other two indicators are tested likewise.

Table 6 presents the outputs of the paired-samples t-test. The p values for all three types are greater than 0.05, so the null hypothesis cannot be rejected (i.e., there is no statistically significant difference between the two datasets).

Table 7 shows the results of the Wilcoxon signed-rank test, with p values of 0.020, 0.494, and 0.629 for the three types, respectively. Thus, for internationally coauthored papers there is a statistically significant difference between the two datasets, while the other two indicators show no significant difference.

The results in Tables 6 and 7 indicate that the difference between the two datasets is small with respect to the national coauthorship and single-authorship indicators. However, there is a gap between the datasets in internationally coauthored papers; the reasons are explained in the subsequent analysis.

Given the productivity of the 20 countries in Table 4, the analysis below focuses on the top 10 countries, whose papers account for more than 70% of the world total (73.65% in all journals and 77.69% in resident journals). The authorship types of the top 10 countries in the total and resident journals datasets are detailed in Figure 5.

Figure 5 shows that there is almost no difference between the two datasets, although some particular patterns appear across the three stages. In terms of the international coauthorship indicator, China and Spain show an increase and then a decrease, while the proportions of the other eight countries increase to various extents. National single authorship presents a downward trend in the nine countries other than Spain. Overall, nationally single-authored publications play the major role in these countries in the first stage; by the final stage, that role has gradually been taken over by coauthorship, in line with our judgment from Figure 4. However, the dominant forms of collaboration differ, as shown in 3a and 3b of Figure 5: the top 4 countries are dominated by nationally coauthored papers, while the countries ranked 5–10 are led by international collaboration.

Note that, in panels 3a and 3b of Figure 5, the differences in the proportion of internationally coauthored papers between the two datasets are more pronounced for the UK, Canada, Germany, and Spain. These countries have higher proportions of international coauthorship in resident journals, revealing a gap between the two datasets: they appear to prefer resident journals for internationally collaborative work. Considering that the top 10 countries are responsible for more than 70% of the papers, this result also partially explains the differences observed in Tables 6 and 7.

4.4. Publications in JIF Quartile

The distribution of publications across JIF quartiles is illustrated in Figure 6, with data from resident and nonresident journals distinguished by different fill patterns and colors. In the early period, resident journals account for a large proportion of all four quartiles. By the third stage, however, resident journals hold a clearly dominant position only in the Q1 zone, at 90.24%; in the Q2, Q3, and Q4 zones, they account for only 52.76%, 63.32%, and 39.33%, respectively. This implies that a paper published in a resident journal is more likely to appear in a higher-impact-factor venue.

Based on data from the total and resident journals datasets, the performance of the top 10 countries across JIF quartiles is presented in detail in Figure 7. Germany, Denmark, and Sweden published more in the Q1 and Q2 zones, indicating that scholars from these countries are more likely to place papers in higher quality journals. In contrast, papers from Canada and Australia are mainly published in lower quality journals. The resident journals dataset differs from the all journals dataset, especially in the second and third stages. Most countries perform better in the resident journals dataset than in the total journals dataset, except for the Netherlands in the second stage and Canada in the third stage.

4.5. Citation Analysis

In this paper, we use both the CPP and PP top x% indicators to investigate the impact of the three datasets.

4.5.1. Citation per Publication (CPP)

Based on the two citation windows, the annual change of the CPP indicator is presented in Figure 8. Each of the six CPP series shows growth, albeit with some fluctuations, indicating that the opportunity to be cited is growing with the increasing number of publications in WoS. The changes in the resident journals dataset are almost identical to those in the total journals dataset in both citation windows. Resident journals have significantly outperformed journals overall since 2009, which is also reflected in the widening gap between resident and nonresident journals.
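The CPP computation with a fixed citation window can be sketched as follows. This is an illustrative implementation with hypothetical records (the paper's actual data come from WoS): for papers published in year Y, only citations received within the following `window` years are counted.

```python
# Sketch of the CPP (citations per publication) indicator with a fixed
# citation window, using hypothetical citation records.
def cpp(papers, pub_year, window):
    """papers: list of (publication_year, {citing_year: citation_count}) tuples.

    Returns average citations per paper for the cohort published in pub_year,
    counting only citations received in pub_year..pub_year + window.
    """
    cohort = [cites for year, cites in papers if year == pub_year]
    if not cohort:
        return 0.0
    total = sum(
        n for cites in cohort
        for citing_year, n in cites.items()
        if pub_year <= citing_year <= pub_year + window
    )
    return total / len(cohort)

# Two hypothetical papers published in 2012
records = [
    (2012, {2012: 1, 2013: 3, 2014: 2, 2016: 5}),
    (2012, {2013: 0, 2014: 4, 2015: 2}),
]
print(cpp(records, 2012, 3))  # 3-year window (2012-2015) -> 6.0
print(cpp(records, 2012, 5))  # 5-year window (2012-2017) -> 8.5
```

The choice of window matters: the first paper's 2016 citations count only under the 5-year window, which is why the two series in Figure 8 need not move identically.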

Based on data from the total and resident journals datasets, the CPPs for the top 10 countries are shown in Figure 9. Because of the patterns presented in Figure 2, we calculated the CPPs in three stages. The first two, 1998–2006 and 2007–2011, are in line with the prior analysis, but the third differs: since the citable data are limited by the choice of citation window, the third stage covers 2012–2015 for the 3-year window and 2012–2013 for the 5-year window.

Figure 9 shows that the indicators in the resident journals dataset are higher than those in the total journals dataset. The differences between the countries across the two citation windows are small. A subtle difference appears between Germany and Sweden: the former performs better in the 3-year window, and the latter in the 5-year window.

In general, countries with greater productivity do not necessarily perform better. For example, the CPP of the United States, the country with the largest number of publications, sits in the middle of the top 10. In contrast, the CPPs of the UK, the Netherlands, Germany, Denmark, and Sweden are higher, implying a better quality of publications in these countries. Figure 7 indicates that the higher quality of Germany, Denmark, and Sweden stems mainly from publishing more in the Q1 zone, while the UK and the Netherlands have a high proportion of papers in the Q1 and Q2 zones.

4.5.2. Highly Cited Publications

The top 1%, 5%, and 10% most cited publications are employed to better explore the citation patterns. To present the gap between the resident and nonresident journals datasets, the share of resident journal publications among all highly cited publications is calculated, and the changes in this proportion are shown in Figure 10. Overall, highly cited papers in resident journals account for more than 65% of the total, which means that resident journals are the major contributors of highly cited papers. However, the overall trend of these ratios is decreasing, reflecting a narrowing gap between resident and nonresident journals.

Based on the 3-year and 5-year windows, the numbers of the top 1%, 5%, and 10% most cited publications of the top 10 countries are counted separately. Following the Leiden Ranking, these values are recorded as P top 1%, P top 5%, and P top 10%. The normalized indicators, recorded as PP top 1%, PP top 5%, and PP top 10%, measure the proportion of the most cited papers in the total number of papers published per country. For example, in panel 2b of Figure 11, based on the 3-year window from 2012 to 2015, the number of top 5% cited papers and the total number of papers produced by the United States are 224.5 (not an integer due to the fractional counting method) and 2139, so the PP top 5% indicator = 224.5/2139 = 10.5% > 5%. This means that the US share of top 5% cited papers exceeds the world average.
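The fractional counting and normalization steps behind the US example above can be sketched as follows. The US figures (224.5 out of 2139) are taken from the text; the two-paper country-crediting example is hypothetical:

```python
# Sketch of fractional counting and the PP top x% normalization.
from collections import Counter

def fractional_counts(papers):
    """papers: list of country lists, one per paper.

    Each paper gives 1/n credit to each of its n distinct author countries,
    which is why country totals need not be integers (e.g., 224.5).
    """
    counts = Counter()
    for countries in papers:
        distinct = set(countries)
        for country in distinct:
            counts[country] += 1 / len(distinct)
    return counts

# Hypothetical example: a US-UK coauthored paper credits 0.5 to each country,
# a US-only paper credits 1.0 to the US.
print(fractional_counts([["US", "UK"], ["US"]]))  # US: 1.5, UK: 0.5

# PP top x%: a country's fractional count of top-x% cited papers divided by
# its fractional total; the US values below are taken from the paper's text.
us_pp_top5 = 224.5 / 2139
print(f"PP top 5% (US) = {us_pp_top5:.1%}")  # 10.5% > 5%, above the world average
```

Under this scheme a country's PP top x% above x% means it produces more than its proportional share of the world's most cited papers.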

The normalized indicators are shown in Figures 11 and 12, corresponding to the 3-year and 5-year windows, respectively. There is no significant difference in the direction of change between the two datasets across the three stages, but the rates of change differ. In general, the resident journals dataset performs better than the total journals dataset. The choice of window produces some window-specific results; to ensure reliability, we mainly focus on phenomena common to both windows.

For the PP top 1% indicator, the United States, the United Kingdom, the Netherlands, Denmark, and Sweden have consistently exceeded the world average across the three stages. Germany and Spain have experienced significant fluctuations, with an overall downward tendency. Notable improvement has been made by Canada and the Netherlands. Although China has made progress, the extent is still small: its values reach the world average in 2012 and 2013 for the 5-year window but not for the 3-year window, showing that China still lags other countries in producing top-level publications.

For the PP top 10% indicator, the gap between the top 10 countries is not as large as for the PP top 1% indicator. The US, the UK, the Netherlands, and Germany have consistently exceeded the world average in the three periods. Germany's situation is noteworthy: although it does not perform well on the PP top 1% indicator, it exceeds the world average on the other two indicators. This may suggest that Germany has muscular strength but lacks Olympic players. Contrary to Germany, China's performance on the PP top 10% indicator is not as good as on the PP top 1% indicator, indicating that China's overall capacity still needs to be strengthened. As a transition indicator, the result of PP top 5% is more similar to that of PP top 10% than to PP top 1%.

In general, the US, the UK, and the Netherlands are always the main players in publishing highly cited papers.

5. Discussion

The resident journal is an interesting phenomenon observed in this study when examining the journal lists of specific subject categories of JCR. Resident journals and nonresident journals may represent traditional and emerging forces in the discipline, respectively. These forces are ostensibly represented by journals, but they may potentially reflect the objective patterns and subjective capacities of individual researchers, institutions, and countries.

The use of a 10-year window as the threshold in the definition means that the resident journal is essentially a time-span-based concept: the set will vary as the time window changes. However, as mentioned earlier, the set of journals in PA expanded dramatically from 2007 and did not stabilize until around 2012. Therefore, a 10-year window, which spans a five-year growth period and a five-year stabilization period, better reflects the evolving balance between traditional and emerging forces during the expansion of the journal list in PA.

For indicators related to productivity (such as the number of publications and the number and proportion of authorship types), the difference between resident and other journals is not very significant. However, for indicators related to research quality, resident journals outperform the others. These results indicate that resident journals, which represent the traditional forces, still dominate PA. Resident journals have better representativeness across productivity, national and institutional contributors, collaboration patterns, JIF quartile indicators, and citation impact. Moreover, the research unit analysis shows that resident journals retain many stable contributors among the active countries and institutions.

Overall, the questions raised in Section 1 can be answered as follows.

At the journal level, resident journals have greater influence in PA than nonresident journals, although the gap between them is narrowing on some indicators. Publications in the resident journals dataset and the total journals dataset show great similarity, indicating that resident journals have a certain representativeness, though this similarity has recently been weakening.

At the country level, activity as measured by publication productivity is not strictly linked to research quality as represented by higher quality journals and citation impact. Consistent with previous studies, the proportion of collaboration in PA is still lower than in some other social science disciplines, but both international and national collaboration show a growth tendency.

At the institution level, the distribution of active institutions in PA is highly skewed among countries. Most of the active institutions are from the US and the UK, which indicates that these two countries still hold the dominant advantage. These active institutions are less dependent on resident journals over time.

It is noteworthy that these findings also support some conclusions of previous studies. For example, the contribution of collaboration (both international and national) generally shows an increasing trend, although, compared with some fields in the social sciences, the share of coauthorship in PA is still low. In addition, many of the top 15 institutions identified in the study of the journal PAR [2] also appear in the list of institutions above (Table 5).

6. Conclusion

Bibliometrics is a set of statistical methods for researching and measuring bibliographic information. More specifically, bibliometrics describes the characteristics of research results in an objective way, which can counter the subjective bias of typical, traditional evaluation in the PA field. To provide a comprehensive and accurate picture of influence, this study estimated publications bibliometrically. Both activity and quality indicators of research results were applied to articles published over a 19-year period, with 3-year and 5-year citation windows employed at three aggregation levels (journal, country, and institution) using publications in PA indexed in SSCI. The scope of this study expands from single or several journals to 53 journals, covering the entire field of PA. Various methods and indicators from scientometrics and bibliometrics were used to reflect influence, and the empirical analysis conducted here is larger in scale than in previous studies.

A new concept named “resident” journal is proposed to explore differences between traditional and emerging research forces. The results suggest that resident journals maintain a large advantage over other journals in terms of higher quality journal indicators and citation impact indicators. Moreover, international and national collaborations show a growth tendency, especially for the international type. The majority of active institutions are from the US and the UK, which indicates their dominant position over others. This study provides more comprehensive comparisons through large-scale data and acknowledged methods to explore the development of PA field research.

Data Availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon request.

Conflicts of Interest

The author declares that he has no conflicts of interest.

Authors’ Contributions

Zepeng Yu designed the study, analyzed the data, and wrote the manuscript.

Acknowledgments

The author is grateful to Professor Loet Leydesdorff for the software “isi.exe” used to extract and analyze the data, and to Clarivate Analytics for the raw data of the journals in the field of public administration.