Published in Vol 23, No 9 (2021): September

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/29239.
Scholarly Productivity Evaluation of KL2 Scholars Using Bibliometrics and Federal Follow-on Funding: Cross-Institution Study


Original Paper

1Clinical and Translational Science Collaborative, School of Medicine, Case Western Reserve University, Cleveland, OH, United States

2Health Sciences Library, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States

3School of Information and Library Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States

4North Carolina Translational and Clinical Sciences Institute, School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States

5Division of General Medicine and Clinical Epidemiology, School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States

6Center for Clinical and Translational Science, Mayo Clinic, Rochester, MN, United States

*All authors contributed equally

Corresponding Author:

Fei Yu, MPS, PhD

Health Sciences Library

University of North Carolina at Chapel Hill

335 S. Columbia Street CB 7585

Chapel Hill, NC, 27599

United States

Phone: 1 9199622219

Email: feifei@unc.edu

Abstract

Background: Evaluating outcomes of the clinical and translational research (CTR) training of a Clinical and Translational Science Award (CTSA) hub (eg, the KL2 program) requires the selection of reliable, accessible, and standardized measures. As measures of scholarly success usually focus on publication output and extramural funding, CTSA hubs have started to use bibliometrics to evaluate the impact of their supported scholarly activities. However, the evaluation of KL2 programs across CTSAs is limited, and the use of bibliometrics and follow-on funding is minimal.

Objective: This study seeks to evaluate scholarly productivity, impact, and collaboration using bibliometrics and federal follow-on funding of KL2 scholars from 3 CTSA hubs and to define and assess CTR training success indicators.

Methods: The sample included KL2 scholars from 3 CTSA institutions (A-C). Bibliometric data for each scholar in the sample were collected from both SciVal and iCite, including scholarly productivity, citation impact, and research collaboration. Three federal follow-on funding measures (at the 5-year, 8-year, and overall time points) were collected internally and confirmed by examining a federal funding database. Both descriptive and inferential statistical analyses were computed using SPSS to assess the bibliometric and federal follow-on funding results.

Results: A total of 143 KL2 scholars were included in the sample, with relatively equal groups across the 3 CTSA institutions. On average, the included KL2 scholars produced more publications and citations per year at the 8-year time point (3.75 publications and 26.44 citations) than at the 5-year time point (3.4 publications and 26.16 citations). Overall, the KL2 publications from all 3 institutions were cited twice as much as others in their fields based on the relative citation ratio. KL2 scholars published work with researchers from other US institutions over 2 times (5-year time point) or 3.5 times (8-year time point) more than others in their research fields. Within 5 years and 8 years postmatriculation, 44.1% (63/143) and 51.7% (74/143) of KL2 scholars, respectively, achieved federal funding. KL2 scholars at Institution C had a significantly higher citation rate per publication than those at the other institutions (P<.001). Institution A had a significantly lower rate of nationally field-weighted collaboration than the other institutions (P<.001). Institution B scholars were more likely to have received federal funding than scholars at Institution A or C (P<.001).

Conclusions: Multi-institutional data showed a high level of scholarly productivity, impact, collaboration, and federal follow-on funding achieved by KL2 scholars. This study provides insights on the use of bibliometric and federal follow-on funding data to evaluate CTR training success across institutions. CTSA KL2 programs and other CTR career training programs can benefit from these findings in terms of understanding metrics of career success and using that knowledge to develop highly targeted strategies to support early-stage career development of CTR investigators.

J Med Internet Res 2021;23(9):e29239

doi:10.2196/29239

Introduction

Evaluating outcomes of a Clinical and Translational Science Award (CTSA) hub’s clinical and translational research (CTR) training, such as the KL2 program, requires the selection of reliable, accessible, and standardized measures. The KL2 program is a multiyear mentored training award focusing on early-stage career development of investigators in CTR. The National Center for Advancing Translational Sciences (NCATS) at the National Institutes of Health (NIH) funds over 60 KL2 programs across CTSA hubs. All CTSA hubs offer a KL2 program, which is a formal, mentored training experience for scholars with doctoral degrees. Each CTSA hub selects KL2 candidates from a variety of fields (eg, medicine, nursing, and biostatistics) to participate in translational research training in a multidisciplinary setting with up to 5 years of career development support.

One of the strategic goals of NCATS for the CTSA program is to “develop and foster innovative translational training and a highly skilled, creative, and diverse translational science workforce” [1]. The KL2 program is one mechanism that offers such training. Each CTSA hub is responsible for evaluating its KL2 program, typically using methods such as surveys, focus groups, exit interviews, and alumni follow-up. Measures of scholarly success are predominantly scholarship products—publication output and extramural funding, such as an R01-equivalent award. The NCATS Common Metrics Initiative also created measures for all KL2 programs to report on the number and percentage of total graduates: women, underrepresented minorities, and KL2 scholars who sustain their translational research engagement [2].

All NCATS-funded KL2 programs follow a set of established requirements. For example, 75% of the time of enrolled KL2 scholars is funded by the program, with the exception of 50% for surgeons [3]. All KL2 programs must provide training in rigorous research methodologies aligned with CTR competencies. KL2 programs must also provide opportunities that allow scholars to communicate and collaborate effectively across multidisciplinary teams [3,4]. Beyond these requirements, CTSA hubs are encouraged to innovate and tailor their KL2 programs to their specific needs, leading to some variability in KL2 programs. For example, although programs typically provide 2 years of KL2 funding, some provide 3 or more years [5]. Such differences in hub-level KL2 program characteristics present challenges for defining indicators of success and standardizing the evaluation of KL2 program outcomes across CTSA hubs.

CTSA hubs sometimes use bibliometric analysis to evaluate their impact on moving translational research forward. Bibliometrics refers to the use of quantitative and statistical methods to analyze a chosen group of publications [6,7]. Bibliometrics complement evaluation methods such as surveys, focus groups, and interviews by quantifying outcomes, such as publications and scholarly impact, that cannot be objectively measured through self-report. One study [8] used bibliometrics to analyze publications citing all CTSA hub grants from 2006 to 2016, demonstrating the value of bibliometrics for assessing the impact of research across all CTSAs as measured by interdisciplinary collaboration, influence on other publications, and breadth of scientific fields. Another study [9] used bibliometrics to analyze publications citing any of 6 CTSA hubs, drawing measures from three sources: NIH iCite (publicly available), Thomson Reuters (now Clarivate Analytics), and Elsevier. This study identified relevant data sources and standardized analyses for cross-CTSA comparisons. A more recent CTSA hub-specific study [10] implemented advanced bibliometric measures to assess scholarly productivity, citation impact, the scope of research collaboration, and clusters of research topics of publications supported by their CTSA. Furthermore, some CTSA hubs have evaluated the feasibility of different bibliometric approaches to assessing KL2 scholarly productivity and influence compared with other NIH-funded K-awardees [11,12]. These studies illustrated the importance of standardizing the (1) collection of publication data, (2) definition of success and measures for KL2 training outcomes, and (3) use of bibliometric methods to evaluate the scholarly productivity and impact of KL2 scholars across the CTSA consortium.

To our knowledge, the evaluation of KL2 programs across CTSAs is limited, and the use of bibliometrics and follow-on funding is minimal. This study evaluates scholarly productivity using bibliometrics and federal follow-on funding of KL2 scholars across 3 different CTSA hubs to define and assess CTR training success indicators. Such indicators, coupled with evidence of the feasibility of their collection and use, would allow CTR training programs to demonstrate effectiveness on a wide variety of outcomes.

Methods

Participants

Our sample included KL2 scholars from 3 CTSA institutions, described as institutions A-C throughout the paper. A description of program-level characteristics is provided in Multimedia Appendix 1. The 3 institutions were selected as a convenience sample based on previous collaborations among the CTSAs. We only included scholars in our analysis if they had at least 5 years of bibliometric data, starting at matriculation (ie, year of entry into the KL2 program). We chose the date range of 2005-2013 to ensure that all scholars met the minimum 5-year data requirement. In other words, scholars beginning their KL2 program in 2013 had 5 years of bibliometric data as of December 2018.

Measures

Overview

We selected bibliometrics and federal follow-on funding because they are objective, verifiable measures with evidence of reliability and validity. In the following sections, we describe various bibliometrics along a continuum of objectivity. Although some metrics, such as the number of publications, are objective counts, other metrics, such as citation impact and collaboration, are quantified representations of a construct and are open to interpretation in scale, scope, and reliability.

Bibliometrics

When we selected bibliometric measures and sources for this study, we considered the (1) validity, relevancy, and feasibility of measures, especially the metrics and sources that have been verified and adopted in previous CTSA evaluation studies; (2) coverage of publications of our KL2 scholars in three CTSAs; and (3) availability of bibliometric data sources such as institutional subscriptions and free accessibility. Accordingly, we chose Elsevier SciVal and NIH iCite for the bibliometric measurements. SciVal is a research analytics tool for various bibliometrics gathered from the Elsevier Scopus citation database. Scopus has much broader coverage in biomedical and life sciences than Web of Science [13,14] and has been adopted in multiple bibliometric studies concerning CTSA evaluations [9,10,15]. The 3 CTSA institutions in this study have subscriptions to Scopus or SciVal and have experience in using Elsevier metrics to assist research performance evaluation. In addition, we added the relative citation ratio (RCR) generated by NIH iCite as an additional citation measure. Being freely accessible and increasingly used by CTSA evaluators, RCR is a field-independent metric, measuring the citation impact of an article relative to other NIH-supported research papers produced in the same field during the same timeframe. However, iCite is limited to analyzing only articles indexed in PubMed, whereas SciVal can provide aggregated data from the more comprehensive Scopus database. Extracting data from both sources can help reduce potential bias and provide a more reliable estimate of the quantity and impact of scholarly work. Therefore, we exported bibliometric data for each of the included scholars from both SciVal and iCite to provide evidence of validity and reliability within this sample [16,17].

Data exported from SciVal provide evidence for three domains of research performance: productivity, impact, and collaboration. Productivity metrics provide an overview of total scholarly output, that is, the number of publications a scholar produces within a specified period. Impact metrics focus on citation counts through raw and calculated variables, accounting for field-weighted or ratio values. We excluded self-citations from the analysis. Collaboration metrics consider all authors of a publication, with attention to the affiliated institutions of coauthors. We used iCite data to understand the impact of the publications compared with the average of NIH-funded publications published in the same year and field [18]. Table 1 shows a detailed list of metrics used per domain.

Table 1. Summary of bibliometrics used.

SciVal
  Productivity
    Scholarly output: number of publications indexed in Scopus
  Impact
    Citations per publication: average number of citations received per publication
    PPTPJa: number of publications in the world's top journals
    FWCIb: ratio of citations received relative to the expected average for the field, type, and year
  Collaboration
    NFWCc: collaboration ratio computed based on the expected collaboration for that document type, publication year grouping, and subject area assignment

iCite
  Impact
    Average RCRd: cites per year of each paper, normalized to the citations per year received by NIHe-funded papers in the same field and year
    Average citations per year: citations per year received since publication; this is the numerator for the RCR
    Average field citation rate: intrinsic citation rate of the paper's field, estimated using its co-citation network
    Average NIH percentile: percentile rank of the paper's RCR compared with all NIH publications

aPPTPJ: percentage of publications in the top 10th percentile of journals.

bFWCI: field-weighted citation impact.

cNFWC: National Field-Weighted Collaboration.

dRCR: relative citation ratio.

eNIH: National Institutes of Health.
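To make the ratio-style metrics in Table 1 concrete, the following toy sketch (illustrative numbers only, not study data) shows the common structure of FWCI- and RCR-style metrics: observed citations divided by a field-, type-, and year-matched benchmark, with per-article ratios averaged across a portfolio.

```python
def field_weighted_ratio(citations: float, expected_citations: float) -> float:
    """Generic ratio metric: observed citations over the benchmark expected
    for the same field, document type, and year. FWCI and RCR are both
    built this way, using different benchmark populations."""
    if expected_citations <= 0:
        raise ValueError("benchmark must be positive")
    return citations / expected_citations

# Hypothetical article: 30 citations where the field benchmark expects 15.
fwci_like = field_weighted_ratio(30, 15)  # 2.0 -> cited twice the field average

# Averaging per-article ratios across a scholar's portfolio (as iCite does
# for average RCR); toy per-article ratios:
portfolio = [2.0, 1.5, 0.5, 3.0]
avg_rcr_like = sum(portfolio) / len(portfolio)
print(fwci_like, avg_rcr_like)
```

A ratio of 1.0 means the article performs exactly at its benchmark; values above 1.0 indicate above-average citation influence, which is how the "cited twice as much" results later in the paper should be read.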

Federal Follow-on Funding

Three federal follow-on grant funding measures were also collected: NIH funding at the 5-year, 8-year, and overall time points. These follow-on funding measures include only NIH federal funding received by a scholar as the principal investigator, coprincipal investigator, or coinvestigator. This information can be independently confirmed for all 3 institutions by examining NIH RePORTER (Research Portfolio Online Reporting Tools Expenditures and Results) for NIH funding records [19].

Procedures

This study received institutional review board exemption from 2 institutions, with the third institution determining that this was not human subjects research. Each participating institution developed a list of KL2 scholars awarded since 2005, including gender, race, ethnicity, and highest degree earned for each scholar. We dichotomized race because of sample skewness; however, dichotomization still resulted in unequal group sizes. We excluded ethnicity from analysis because of the homogeneity of the sample. We collapsed degree categories into MD and non-MD terminal degrees, with clinical training as the distinction between the two. Scholars with both MD and PhD degrees were considered MDs.

Cohorts of scholars were created in SciVal using the year each scholar began their KL2 grant (ie, all 2005 scholars were in one cohort), according to the SciVal cohort instructions. To obtain iCite data, a list of publications associated with each KL2 scholar was retrieved from PubMed and then manually verified. The PubMed identification numbers (PMIDs) of the verified publications were filtered to the 2005-2013 date range for this analysis. Next, the validated PMIDs were imported into iCite to download the data for each scholar. All metrics from SciVal and iCite were extracted and entered into an SPSS (IBM; version 25) file within 6 months. Grant data were collected using internal records of federal funding and confirmed through NIH RePORTER [19].
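For teams that want to script the iCite step rather than upload PMIDs manually, iCite exposes a public API that accepts batches of PMIDs. The sketch below only constructs the batched request URLs; the endpoint form and the batch size are our assumptions based on the public iCite documentation and should be checked against current limits before use.

```python
from typing import Iterable, List

# Public iCite endpoint (assumed form; verify against current iCite API docs)
ICITE_API = "https://icite.od.nih.gov/api/pubs"

def batch_pmid_urls(pmids: Iterable[int], batch_size: int = 200) -> List[str]:
    """Split verified PMIDs into batches and build one iCite request URL
    per batch. batch_size is illustrative, not a documented limit."""
    pmid_list = list(pmids)
    urls = []
    for i in range(0, len(pmid_list), batch_size):
        chunk = pmid_list[i:i + batch_size]
        urls.append(f"{ICITE_API}?pmids={','.join(map(str, chunk))}")
    return urls

# 450 hypothetical PMIDs -> 3 batched requests
urls = batch_pmid_urls(range(1, 451), batch_size=200)
print(len(urls))
```

Each returned URL can then be fetched and the JSON responses merged into one per-scholar dataset, mirroring the manual export described above.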

Data Analysis

All statistical analyses were computed using SPSS [20]. Nonparametric tests for independent samples (Kruskal-Wallis and Mann-Whitney U) were used when data were not normally distributed. We conducted descriptive and inferential statistical analyses according to demographic subgroups and institutions to assess the bibliometric results of KL2 scholars. Federal funding data were analyzed using logistic regression to evaluate the relationship between categorical variables (eg, gender, race, degree, and institution) and a series of dichotomized variables representing the presence or absence of federal funding at three time points, as described below.
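The study ran these tests in SPSS; as a language-agnostic illustration of the nonparametric logic, the minimal sketch below computes the Mann-Whitney U statistic for one group by hand, on toy numbers that are not study data.

```python
from itertools import product

def mann_whitney_u(x, y):
    """U statistic for sample x against sample y: the number of (x, y)
    pairs in which the x value exceeds the y value, counting ties as 0.5.
    This is the rank-based quantity that the significance test is built on."""
    u = 0.0
    for xi, yi in product(x, y):
        if xi > yi:
            u += 1.0
        elif xi == yi:
            u += 0.5
    return u

# Toy 5-year publication counts for two hypothetical groups
group_a = [23, 30, 18, 25]
group_b = [15, 12, 20, 14]
print(mann_whitney_u(group_a, group_b))  # out of 4 * 4 = 16 possible pairs
```

A U near half the number of pairs indicates similar distributions; a U near 0 or near the maximum indicates one group's values systematically exceed the other's, which is what the reported U and P values summarize.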

Data were time-bound from the year the scholar started their K-award (ie, matriculation) through the end of 2018. Three time points structured the analysis: 5 years and 8 years after matriculation, and overall (from the start year of the KL2 award through the end of 2018). These time points are based on established precedents from K08 and K23 evaluations and K-to-R award funding trajectories, which peak at 8 years [21,22]. iCite data were not included for the 5- and 8-year time points because, at the time of analysis, iCite could not produce time point-specific data and only overall citation metrics were available.
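The dichotomized funding variables at the three time points can be sketched as follows. This is a minimal illustration of our reading of the windows; whether a window includes or excludes its boundary year is an assumption, and the years shown are hypothetical, not study data.

```python
def funding_flags(matriculation_year: int, award_years: list, end_year: int = 2018) -> dict:
    """Dichotomize follow-on funding into 5-year, 8-year, and overall
    indicators. A horizon-h window is read here as [matriculation,
    matriculation + h) -- an assumption about boundary handling."""
    def funded_within(horizon: int) -> bool:
        return any(
            matriculation_year <= y < matriculation_year + horizon and y <= end_year
            for y in award_years
        )
    return {
        "5yr": funded_within(5),
        "8yr": funded_within(8),
        "overall": any(matriculation_year <= y <= end_year for y in award_years),
    }

# Hypothetical scholar: matriculated 2008, first federal award in 2014
print(funding_flags(2008, [2014]))
```

In this example the award lands in year 7 after matriculation, so the scholar counts toward the 8-year and overall rates but not the 5-year rate, matching how a scholar can appear in Table 6's 8-year row but not its 5-year row.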

Results

Sample Characteristics

The total sample consisted of 143 KL2 scholars with relatively equal groups across institutions (Table 2). The overall sample at each institution had a slightly greater number of male scholars. The majority of KL2 scholars in this sample were non-Hispanic White people. Most scholars held terminal MD degrees, with PhDs being the second most common degree.

Table 2. Summary of scholar demographicsa.

Demographics             | Sample (N=143) | Institution A (n=50) | Institution B (n=48) | Institution C (n=45)
Gender, n (%)
  Male                   | 77 (54)b       | 27 (54)              | 25 (52)              | 25 (55)
  Female                 | 66 (46)        | 23 (46)              | 23 (48)              | 20 (44)
Ethnicity, n (%)
  Hispanic or Latino     | 6 (4)          | 4 (8)                | 1 (2)                | 1 (2)
  Not Hispanic or Latino | 136 (95)       | 45 (90)              | 47 (98)              | 44 (98)
  Not reported           | 1 (1)          | 1 (2)                | 0 (0)                | 0 (0)
Race, n (%)
  American Indian        | 0 (0)          | 0 (0)                | 0 (0)                | 0 (0)
  Asian                  | 19 (13)        | 9 (18)               | 1 (2)                | 9 (20)
  African American       | 10 (7)         | 3 (6)                | 2 (4)                | 5 (11)
  White                  | 112 (78)       | 36 (72)              | 45 (94)              | 25 (55)
  Not reported           | 1 (1)          | 2 (4)                | 0 (0)                | 0 (0)
Degree, n (%)
  MD                     | 74 (52)        | 20 (40)              | 29 (60)              | 25 (55)
  PhD                    | 47 (33)        | 17 (34)              | 15 (31)              | 15 (33.3)
  MD and PhD             | 15 (11)        | 7 (14)               | 4 (8)                | 4 (8.9)
  Other                  | 7 (5)          | 6 (12)               | 0 (0)                | 1 (2.2)

aPercentages were rounded from tenths to the nearest whole number.

bItalicized values indicate the largest value.

Bibliometrics

SciVal

Data from SciVal for all 143 scholars revealed a median of 17 publications, or 3.4 publications per year on average, at the 5-year time point. Impact metrics showed a median of 26.16 citations per year, or an average of 130.8 citations over 5 years. KL2 scholars were cited approximately 7 times more than other researchers in their field. In addition, 44.01% (1558/3540) of the publications were in the top 10th percentile of journals. In terms of collaboration, KL2 scholars published work with researchers from other US institutions over 2 times more than others in their research fields. Contrary to the overall trends, investigating scholars at the institution level revealed some differences (Table 3). The KL2 scholars of Institution C had a significantly higher citation rate per publication than the other institutions (H2=12.35; P=.002). Institution A had a significantly lower rate of nationally field-weighted collaboration than the other institutions (H2=70.49; P<.001). No other significant 5-year differences were found.

Data from SciVal for 112 scholars showed an increase in publication rate at the 8-year time point, with scholars from all institutions publishing an average of 3.75 publications per year, up from 3.4 at the 5-year time point. The mean citation rate also increased slightly, from 26.16 at the 5-year time point to 26.44 at the 8-year time point. The percentage of publications in the top 10th percentile of journals decreased by less than 1%, to 43.34% (2658/6132). Rates of collaboration increased, with KL2 scholars at 8 years publishing with researchers across the nation 3.5 times more than other researchers in their field. Results at the institutional level for the 8-year time point were similar to those at the 5-year time point (Table 3). Institution C scholars had a significantly higher rate of citations per publication (H2=10.12; P=.006), and Institution A scholars had a significantly lower rate of field-weighted national collaboration (H2=65.08; P<.001; n=112; Institution A: n=37; Institution B: n=37; and Institution C: n=38).

Regarding demographics, outcomes from SciVal metrics reported that male scholars published significantly more work at the 5-year time point than female scholars (U=1114.50; P=.008; Table 4). There were no differences between the scholar race and any reported bibliometric outcomes. Scholars with an MD degree had significantly higher field-weighted citation indices than scholars without an MD degree (U=1202.50; P=.04).

Table 3. Postmatriculation SciVal bibliometric summary (medians reported). Values are 5-year (n=143) / 8-year (n=112) medians.

Bibliometric              | All institutions | Institution A | Institution B | Institution C
FWCIa                     | 7.16 / 12.4      | 6.79 / 10.2   | 6.8 / 12.6    | 8.4b / 14.2
Scholarly output          | 17 / 30          | 15 / 27       | 15 / 26.5     | 22 / 40.5
Citations per publication | 130.8 / 211.5    | 93.9 / 161.3  | 109.9 / 174.4 | 174.1 / 288
PPTPJc (%)                | 44 / 43.3        | 40.9 / 43.2   | 39 / 41.9     | 48.5 / 45.8
NFWCd                     | 2.07 / 3.6       | 0.78 / 0.65   | 3.4 / 5.7     | 2.9 / 4.9

aFWCI: field-weighted citation impact.

bItalicized values indicate the largest values.

cPPTPJ: percentage of publications in the top 10th percentile of journals.

dNFWC: National Field-Weighted Collaboration.

Table 4. SciVal bibliometric outcomes by demographic groups (medians reported). Values are 5-year / 8-year medians.

Bibliometric              | Male       | Female    | White     | People of color | MD        | Non-MD
FWCIa                     | 8.1b / 14  | 6.3 / 10  | 7 / 11    | 8 / 13          | 8 / 12    | 6 / 9
Scholarly output          | 23 / 35    | 15 / 26   | 18 / 30   | 12 / 29         | 20 / 35   | 15 / 25
Citations per publication | 144 / 200  | 115 / 185 | 125 / 185 | 167 / 204       | 144 / 192 | 109 / 193
PPTPJc                    | 45 / 43    | 39 / 41   | 42 / 40   | 49 / 45         | 47 / 44   | 39 / 36
NFWCd                     | 2.2 / 3.6  | 1.8 / 3.6 | 2.1 / 3.9 | 1.6 / 2.2       | 2.3 / 4   | 1.3 / 2.5

aFWCI: field-weighted citation impact.

bItalicized values indicate the largest value.

cPPTPJ: percentage of publications in the top 10th percentile of journals.

dNFWC: National Field-Weighted Collaboration.

iCite

Data from iCite were available only as a comprehensive report covering all years included in this study, as illustrated in Table 5. KL2 publications from all institutions were cited twice as much as those of other researchers in their fields, earning an average of 4.5 citations per year. The average NIH percentile was 53%, indicating that KL2 publications had an RCR higher than 53% of all NIH-funded publications [17]. At the institutional level, Institution A reported significantly lower results for the average field citation rate (H2=8.96; P=.01). No other significant results were found.

Table 5. iCite bibliometric outcomes by institution (medians reported).

Bibliometric               | All institutions | Institution A | Institution B | Institution C
Average RCRa               | 1.67             | 1.51          | 1.72          | 1.78b
Average citations per year | 3.47             | 2.76          | 3.84          | 3.82
Average field citation rate| 4.05             | 3.79          | 4.27          | 4.09
Average NIHc percentile    | 52.91            | 51.86         | 54.17         | 52.69

aRCR: relative citation ratio.

bItalicized values indicate the largest value.

cNIH: National Institutes of Health.

Federal Follow-on Funding

Grant analysis of KL2 scholars at all 3 institutions indicated that 44.1% (63/143) of KL2 scholars received federal funding within 5 years postmatriculation. At the 8-year time point, 51.7% (74/143) of scholars had achieved federal funding as a principal investigator, coprincipal investigator, or coinvestigator. Significant differences between institutions were found at the 5-year, 8-year, and overall funding rates using the chi-square test (Table 6). Scholars from Institution B were more likely to have received federal funding than scholars at Institution A or C at the 5-year (χ22=15.28; P<.001) and 8-year (χ22=7.07; P=.03) time points. Investigating federal grant funding by scholar characteristics showed no significant differences by gender, race, or degree at the 5-year or 8-year time points. Male scholars in our sample appeared to receive federal funding at higher rates than female scholars, but this difference did not reach significance (χ2=3.56; P=.06).

Table 6. Scholars with extramural federal funding by institution.

Time point | All institutions (N=143), n (%) | Institution A (n=50), n (%) | Institution B (n=48), n (%) | Institution C (n=45), n (%)
5-year     | 63 (44.1)                       | 16 (32)                     | 31 (64.5)a                  | 13 (28.9)
8-year     | 74 (51.7)                       | 24 (48)                     | 32 (66.7)                   | 18 (40)
Overall    | 74 (51.7)                       | 24 (48)                     | 32 (66.7)                   | 19 (42.2)

aItalicized values indicate the largest value.

Discussion

Principal Findings

Overall, this study emphasizes the high level of scholarly productivity, impact, collaboration, and funding achieved by KL2 scholars. This study also provides insights into the utility of both bibliometric and federal follow-on funding to evaluate CTR training success, especially for measuring the scholarly work of training participants. Bibliometric data provide a better understanding of the impact of research publications produced by KL2 scholars. Federal funding data demonstrate the extent to which KL2 scholars are receiving subsequent federal funding and therefore successfully sustaining their research.

Bibliometrics

Building on previous applications of bibliometrics to CTSA performance assessment [8-12], this study adopted a proprietary research analytics tool, SciVal, and a publicly available, federally developed bibliometric tool, iCite, to investigate the productivity and citation impact of KL2 scholars across 3 institutions. Both SciVal and iCite metrics showed that the research publications of KL2 scholars had greater citation influence than contemporaneous publications by other researchers in the same fields. For instance, SciVal metrics (eg, field-weighted citation impact [FWCI] and percentage of publications in the top 10th percentile of journals) showed that, on average at 5 years, KL2 scholars were cited almost 7 times more than other researchers in their field. Approximately half of their articles were published in the top 10th percentile of the world's journals, which indicates the distinctive influence of CTSA-supported KL2 scholars. Similarly, the RCRs generated by iCite showed that the KL2 scholars at the 3 institutions received almost twice as many citations per year as other researchers in their fields, which is consistent with results reported in a previous study [12]. The difference between the citation impact results produced by SciVal (FWCI) and iCite (RCR) reflects the different time ranges and citation-tracking scopes of the two tools. In this study, FWCI results were generated at the 5- and 8-year time points, whereas the RCR of an institution was the average across all included publications from 2005 to 2019. In addition, the Scopus database, from which SciVal data are derived, is one of the largest citation databases in the world; it covers MEDLINE, the primary component of PubMed, in addition to content from thousands of international publishers. Consequently, the citation counts and FWCIs provided by SciVal or Scopus are often higher than those generated by iCite, which tracks citations only within the NIH Open Citation Collection (MEDLINE, PubMed Central, and CrossRef) [23].

Furthermore, KL2 scholars demonstrated significant collaboration with researchers across the United States, coauthoring with researchers at other US institutions 2 times more than others in their research fields at the 5-year time point and 3.6 times more at the 8-year time point, as measured by the National Field-Weighted Collaboration score generated by SciVal. These results confirm the feasibility of applying bibliometrics to assess scholarly work supported by CTSA programs and corroborate the effectiveness of the three CTSA programs in supporting KL2 scholars in translational research [8,9].

Nevertheless, a subanalysis highlighted a few critical demographic differences between scholars across institutions. Male scholars published significantly more than female scholars at 5 years postmatriculation; however, this difference was not observed at the 8-year time point. Previous research confirms the publication difference favoring male scholars, but limited research has investigated long-term differences [24]. Our results suggest that the later period may be a critical point at which female scholars reduce this gap, which may arise from a variety of personal commitments that affect the pace of a female scientist's career. At 8 years postmatriculation, scholars with an MD degree had a significantly higher FWCI than those without one. These demographic differences should be further studied to better understand the role the scientific field plays in supporting scholarly work across different scientific areas.

Similarly, bibliometrics brought institutional differences to our attention (Multimedia Appendix 1). We found that KL2 scholars at Institution C had a significantly higher rate of citations per publication. In comparison, Institution A had a significantly lower rate of National Field-Weighted Collaboration than the other 2 institutions. These cross-institutional differences emphasize the need to invest in the standardization of measures (eg, bibliometrics), improving the ability to evaluate scholarly success across the nation. In previous research, CTSA hubs reported various techniques for reporting and tracking publications [8,9,25]. In addition, differences in publication characteristics have been shown to affect bibliometric outcomes (eg, reviews are usually cited more than original articles, and open-access articles are cited more than non-open-access ones), as have differences in journal coverage across bibliographic databases and in the size and founding time of CTSA programs [9,10,25]. Therefore, although it is feasible to use bibliometrics to analyze the scholarly output and influence of multiple CTSA programs, there are complications in applying citation metrics, interpreting results, and benchmarking the performance of multi-institutional programs. Future research should consider the role of the program-level characteristics outlined in Multimedia Appendix 1.

Federal Follow-on Funding

Our analysis reveals that federal funding award rates for KL2 scholars are higher than the national average of 20.2% [26]. Previous research reported a 34% R01 funding rate for KL2 scholars 6 years postmatriculation [27]. Our study included R01-equivalent awards, but nonetheless suggests that the federal funding rate for KL2 scholars in this sample (63/143, 44.1% at 5 years) may be slightly higher than at other institutions. The data also show that male KL2 scholars were more likely to receive federal funding than their female counterparts; however, this difference was not statistically significant. Previous studies examining gender differences in obtaining grant funding have been mixed, with mitigating effects from both the type of training and the length of time following training [21,22,28]. In our analysis, we investigated the attainment of any NIH funding and thus may have captured funding mechanisms not examined in previous research. To our knowledge, no study, including ours, has examined funding from foundations or other sources. This exclusion is a limitation and a potential area for further research, especially to better understand the factors underlying male KL2 scholars' higher likelihood of receiving follow-on funding.

Limitations

The limitations of this study include programmatic differences between the 3 institutions, differences in scholars' scientific fields, and the scope of the grant data. We intended to investigate the scholarly success of a multi-institutional sample of KL2 scholars and to identify appropriate and feasible methods that could be used across institutions. However, the analysis of these measures revealed that performance on certain metrics could be linked to institutional characteristics. Perhaps the most considerable difference among the 3 KL2 programs is the length of KL2-funded training, which ranged from 2 to 4 years across institutions. Other programmatic distinctions, such as variations in grant writing support, also exist. Future research should identify which program characteristics are related to the outcomes studied here. Other possible analyses include investigating the impact of scholarly field and degree on outcomes, given the differences between MDs and PhDs that we highlighted in our analyses.

Conclusions

This study emphasized the use of bibliometrics and federal follow-on funding as necessary measures for evaluating the scholarly productivity, impact, collaboration, and funding achieved by KL2 scholars. We have shown that evaluators can use these metrics to assess CTR training programs that treat scholarly productivity as a critical outcome. These methods can complement existing evaluation strategies to demonstrate program performance. The findings of this study highlight the need to better understand barriers to and facilitators of scholarly productivity. Institutions should consider similar subanalyses within their evaluations to explore equity within their programs. In addition, there is a need to investigate the impact of programmatic components and the best practices that yield high follow-on funding rates. Program-level goals within and among institutions influence funding outcomes, scholarly productivity, and collaboration. Identifying these differences will enhance the specificity of KL2 program evaluations. CTR training programs, such as CTSA KL2 programs, can benefit from the findings of this and future analyses as they continuously adapt their program strategies to support the early-stage career development of CTR investigators.

Acknowledgments

This project was supported by the Clinical and Translational Science Collaborative at Case Western Reserve University, the North Carolina Translational and Clinical Sciences Institute at the University of North Carolina at Chapel Hill, and the Center for Clinical and Translational Science at the Mayo Clinic, which are funded by the National Institutes of Health National Center for Advancing Translational Sciences Clinical and Translational Science Awards (UL1TR002548; UL1TR002489; KL2 TR002379). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Conflicts of Interest

None declared.

Multimedia Appendix 1

KL2 program comparison across 3 Clinical and Translational Science Award hubs.

XLSX File (Microsoft Excel File), 75 KB

  1. Strategic goal 3: develop and foster innovative translational training and a highly skilled, creative and diverse translational science workforce. National Center for Advancing Translational Sciences.   URL: https://ncats.nih.gov/strategicplan/goal3 [accessed 2021-08-18]
  2. Common metrics initiative. Center for Leading Innovation & Collaboration.   URL: https://clic-ctsa.org/common-metrics-initiative/ [accessed 2021-08-18]
  3. Clinical and translational science award (U54 Clinical Trial Optional). National Institutes of Health.   URL: https://grants.nih.gov/grants/guide/pa-files/par-18-940.html [accessed 2021-08-18]
  4. Calvin-Naylor NA, Jones CT, Wartak MM, Blackwell K, Davis JM, Divecha R, et al. Education and training of clinical and translational study investigators and research coordinators: a competency-based approach. J Clin Transl Sci 2017 Feb;1(1):16-25 [FREE Full text] [CrossRef] [Medline]
  5. Sorkness CA, Scholl L, Fair AM, Umans JG. KL2 mentored career development programs at clinical and translational science award hubs: practices and outcomes. J Clin Transl Sci 2019 Dec 26;4(1):43-52 [FREE Full text] [CrossRef] [Medline]
  6. Pritchard A. Statistical bibliography or bibliometrics? J Doc 1969 Jan;25(4):348-349.
  7. Ellegaard O, Wallin JA. The bibliometric analysis of scholarly production: how great is the impact? Scientometrics 2015;105(3):1809-1831. [CrossRef] [Medline]
  8. Llewellyn N, Carter DR, DiazGranados D, Pelfrey C, Rollins L, Nehl EJ. Scope, influence, and interdisciplinary collaboration: the publication portfolio of the NIH Clinical and Translational Science Awards (CTSA) program from 2006 through 2017. Eval Health Prof 2020 Sep;43(3):169-179. [CrossRef] [Medline]
  9. Schneider M, Kane CM, Rainwater J, Guerrero L, Tong G, Desai SR, et al. Feasibility of common bibliometrics in evaluating translational science. J Clin Transl Sci 2017 Feb;1(1):45-52 [FREE Full text] [CrossRef] [Medline]
  10. Yu F, Van AA, Patel T, Mani N, Carnegie A, Corbie-Smith GM, et al. Bibliometrics approach to evaluating the research impact of CTSAs: a pilot study. J Clin Transl Sci 2020 Apr 02;4(4):336-344 [FREE Full text] [CrossRef] [Medline]
  11. Amory JK, Louden DK, McKinney C, Rich J, Long-Genovese S, Disis ML. Scholarly productivity and professional advancement of junior researchers receiving KL2, K23, or K08 awards at a large public research institution. J Clin Transl Sci 2017 Apr;1(2):140-143 [FREE Full text] [CrossRef] [Medline]
  12. Sayavedra N, Hogle JA, Moberg DP. Using publication data to evaluate a Clinical and Translational Science Award (CTSA) career development program: early outcomes from KL2 scholars. J Clin Transl Sci 2017 Dec;1(6):352-360 [FREE Full text] [CrossRef] [Medline]
  13. Falagas ME, Pitsouni EI, Malietzis GA, Pappas G. Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses. FASEB J 2008 Feb;22(2):338-342. [CrossRef] [Medline]
  14. Pranckutė R. Web of Science (WoS) and Scopus: the titans of bibliographic information in today’s academic world. Publications 2021 Mar 12;9(1):12. [CrossRef]
  15. Luke DA, Carothers BJ, Dhand A, Bell RA, Moreland-Russell S, Sarli CC, et al. Breaking down silos: mapping growth of cross-disciplinary collaboration in a translational science initiative. Clin Transl Sci 2015 Apr;8(2):143-149 [FREE Full text] [CrossRef] [Medline]
  16. Dresbeck R. SciVal. J Med Libr Assoc 2015 Jul;103(3):164-166. [CrossRef]
  17. New analysis. National Institutes of Health.   URL: https://icite.od.nih.gov/analysis [accessed 2021-08-18]
  18. Hutchins B, Yuan X, Anderson JM, Santangelo GM. Relative Citation Ratio (RCR): a new metric that uses citation rates to measure influence at the article level. PLoS Biol 2016 Sep 6;14(9):e1002541 [FREE Full text] [CrossRef] [Medline]
  19. RePORTER. National Institutes of Health.   URL: https://reporter.nih.gov/ [accessed 2021-08-18]
  20. IBM SPSS statistics. IBM.   URL: https://www.ibm.com/products/spss-statistics [accessed 2021-08-18]
  21. Jagsi R, Motomura AR, Griffith KA, Rangarajan S, Ubel PA. Sex differences in attainment of independent funding by career development awardees. Ann Intern Med 2009 Dec 01;151(11):804-811. [CrossRef] [Medline]
  22. Kalyani RR, Yeh H, Clark JM, Weisfeldt ML, Choi T, MacDonald SM. Sex differences among career development awardees in the attainment of independent research funding in a department of medicine. J Womens Health (Larchmt) 2015 Nov;24(11):933-939 [FREE Full text] [CrossRef] [Medline]
  23. Hutchins BI, Baker KL, Davis MT, Diwersy MA, Haque E, Harriman RM, et al. The NIH open citation collection: a public access, broad coverage resource. PLoS Biol 2019 Oct 10;17(10):e3000385 [FREE Full text] [CrossRef] [Medline]
  24. Posen M, Templer DI, Forward V, Stokes S, Stephens J. Publication rates of male and female academic clinical psychologists in California. Psychol Rep 2005 Dec;97(3):898-902. [CrossRef] [Medline]
  25. Llewellyn N, Carter DR, Rollins L, Nehl EJ. Charting the publication and citation impact of the NIH Clinical and Translational Science Awards (CTSA) program from 2006 through 2016. Acad Med 2018 Aug;93(8):1162-1170 [FREE Full text] [CrossRef] [Medline]
  26. Research project success rates by NIH Institute for 2018. NIH Research Portfolio Online Reporting Tools (RePORT).   URL: https://report.nih.gov/success_rates/success_byic.cfm [accessed 2021-08-24]
  27. Schneider M, Guerrero L, Jones LB, Tong G, Ireland C, Dumbauld J, et al. Developing the translational research workforce: a pilot study of common metrics for evaluating the Clinical and Translational Award KL2 program. Clin Transl Sci 2015 Dec;8(6):662-667 [FREE Full text] [CrossRef] [Medline]
  28. Pohlhaus JR, Jiang H, Wagner RM, Schaffer WT, Pinn VW. Sex differences in application, success, and funding rates for NIH extramural programs. Acad Med 2011 Jun;86(6):759-767 [FREE Full text] [CrossRef] [Medline]


CTR: clinical and translational research
CTSA: Clinical and Translational Science Award
FWCI: field-weighted citation impact
NCATS: National Center for Advancing Translational Sciences
NIH: National Institutes of Health
RCR: relative citation ratio
RePORTER: Research Portfolio Online Reporting Tools Expenditures and Results


Edited by R Kukafka; submitted 30.03.21; peer-reviewed by N Llewellyn, PKH Mo, J Du; comments to author 10.05.21; revised version received 04.07.21; accepted 27.07.21; published 29.09.21

Copyright

©Kelli Qua, Fei Yu, Tanha Patel, Gaurav Dave, Katherine Cornelius, Clara M Pelfrey. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 29.09.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.