Published in Vol 27 (2025)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/75361, first published .
Prior Authorization of Medication and Its Influence on Provider Behavior: Latent Class Analysis


1Department of Psychiatry, University of Nebraska Medical Center, 985575 Nebraska Medical Center, Omaha, United States

2Lars Research Institute, Sun City, United States

3Department of Biostatistics, University of Nebraska Medical Center, Omaha, NE, United States

Corresponding Author:

Stephen Salzbrenner, MD


Background: Insurance companies frequently require prior authorization (PA) for medication prescriptions to ensure quality control and safety. The added layer of scrutiny can contribute to provider dissatisfaction and has been associated with adverse patient outcomes. Health care providers have changed prescribing behaviors to avoid PA. Understanding factors contributing to this phenomenon can facilitate systemic change and better patient care.

Objective: The objectives of this study are to identify unique unobserved subgroups of prescribers with similar PA-related behaviors using a finite mixture modeling approach; characterize subgroup membership by important covariates; and examine the influence of subgroup membership on 3 relevant prescribing outcomes.

Methods: A cross-sectional, web-based, nationwide survey of 1173 prescribers was oversampled for psychiatry in support of developing a software-as-a-solution to facilitate PA. Latent class analysis included 12 indicators assessing the degree of PA involvement, provider-insurance communication, and the methods of obtaining or avoiding PA. Covariates included age, gender, race, provider role, specialty, number of prescribers, and patient load. Three clinical decision outcomes included prescribing medication other than initially preferred due to PA delays, avoiding newer medications due to anticipated need for PA, and modifying a diagnosis to obtain PA.

Results: In total, 1147 prescribers responded with 1144 usable surveys (age: median 50.00 [range 25.00, 72.00] years; 569 [49.74%] females; 67.13% White; 44.84% psychiatrists). In total, 4 unique classes were obtained based on 12 indicators assessing PA-related activities. Classes included a High Denial PA class (291 [25.15%]), a Low Volume PA class (178 [15.93%]), a class denoted by Problematic Communication Issues with insurers (227 [19.96%]), and a Low Volume PA class with Problematic Experiences (446 [38.97%]). Only 3 of the 7 covariates (age, specialty type, and patient load) provided additional means to characterize class membership. The observation that certain demographics (race and gender) and other provider and practice characteristics (provider role and number of prescribers) may not be informative has policy implications and can inform means to improve provider-insurer communication. The largest class, reporting problematic PA experiences, had significantly higher mean levels of changing prescribing and diagnostic behaviors than the remaining classes.

Conclusions: Providers are not homogeneous regarding their experience with PA and insurance companies. It is, therefore, important to recognize subtle behavioral differences and find ways to accommodate the PA process to their unique needs. This will facilitate the appropriate implementation of PA by insurance companies. Providers can then avoid the need to alter medications, change diagnoses, or resist prescribing newer, effective medications that may require lengthy clinical documentation.

J Med Internet Res 2025;27:e75361

doi:10.2196/75361

Keywords



Prior authorization (PA) is used by health insurers to manage access to costly medications and ensure their safe, effective, and value-based use [1]. However, PA can negatively impact workflow, patient care, and provider satisfaction. An American Medical Association survey reported that providers spend a mean of 12 hours on PAs per week [2]. Moreover, 95% of physicians reported that PA had a somewhat or significant negative impact on clinical outcomes, including delayed access to care, treatment abandonment by patients, and serious adverse health events. Prescribers reported that 31% (310/1000) of PAs are often or always denied, and 88% (880/1000) of physicians reported that PA led to higher overall utilization of health care resources, including ineffective initial treatment, additional office visits, immediate care and emergency room visits, and hospitalizations. To address these issues, 40% (400/1000) of physicians have staff who work exclusively on PA.

The 2019 ePA National Adoption Scorecard by CoverMyMeds noted that the type of medical specialty can also contribute to PA burden [3]. More in-depth qualitative studies reinforce provider burden due to extensive paperwork and inconsistent PA requirements among health plans [4,5]. Given these consequences and the burden of PA, providers, pharmacists, policy makers, and other stakeholders have supported efforts to limit, standardize, and streamline PA processes [6-11].

Although there is a body of evidence on the benefits and unintended consequences of PA, we could only find 1 published study that examined the effect of PA on providers’ clinical decision-making. This involved a survey of 326 psychiatrists in which a majority reported at least occasionally using tactics including diagnosis modification or falsification of previous medication trials to obtain PA [12]. An additional two-thirds refrained at least occasionally from prescribing preferred medications due to an actual PA requirement or expectation of one. This gap in the literature prompted us to conduct a nationwide survey of ~1200 prescribers representing all but 7 states, which examined clinical practices such as modifying diagnoses, avoiding evidence-based medications, or avoiding prescribing newer medications in relation to various PA burdens and clinical factors [13]. The results of this study as well as survey data from various sources [5,10,11,13] reveal that not all providers feel the same about the PA process, nor do they modify their clinical practices in the same way to avoid problems with PA. When it comes to directly interfacing with health insurers over the issue of PA, one size does not fit all, suggesting there may be manifold provider experiences. Subgroups of providers may exist differentiated based on the tenor of their interactions with PA, owing to differences in the volume of PA, patient load, the quality of provider-insurer interactions, and insurers’ demands to provide support for a particular prescription or course of treatment. To our knowledge, no study has yet examined subgroups of providers who have unique experiences revolving around PA nor determined whether these subgroups differ in prescribing and diagnostic behaviors.

In the present study, we rely on latent class analysis (LCA), a mixture model approach, to determine whether providers form qualitatively distinct subgroups (classes) based on their day-to-day interactions with insurers over PA. The classes differ qualitatively rather than quantitatively because the focus is on “response patterns,” not distributional behaviors like a measure of central tendency (eg, mean) would represent. Mixture models are part of a broad class of person-centered analytic techniques that examine relations between people rather than between variables, as with a variable-centered approach (ie, correlation or regression). LCA is considered a categorical analogue to factor analysis in which the underlying latent factor is categorical (and has a multinomial rather than a continuous distribution), and the indicators for the categorical latent factor are themselves also categorical. The different levels of the categorical latent factor correspond to unique (mutually exclusive and exhaustive) “subgroups” that share behavioral similarity [14,15]. To illustrate, if there are 2 survey questions, each with response formats of “yes” and “no,” there would be 2²=4 possible response patterns (YY, YN, NY, and NN). When there are 8 survey questions, there are 2⁸, or 256, possible response patterns, which makes it much trickier for the naked eye to detect the composition of unique classes. Some type of assignment process is needed that can accurately predict the different response patterns based on the empirical data. This is where LCA can discern meaningful patterns in the data based on probability theorems using a multiway contingency table. Individuals are assigned to their respective class or subgroup based on estimated posterior probabilities using the joint marginal distributions of survey items. There is a margin of error in the assignment process because class membership cannot be perfectly predicted for any individual (perfect prediction of class membership would create a nominal observed variable like gender or race). Once mutually exclusive subgroups are obtained, we address whether they can be further characterized by demographics and other relevant covariates. Following this procedure, we model the relationships between class membership and 3 measures of clinical decision-making that reflect provider behaviors associated with PA experience. As explained below, this analysis provides insight into clinicians’ diagnostic and prescribing behavior and whether it differs based on their unique class membership.
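To make the assignment step concrete, the short sketch below (in R, which this study used elsewhere for imputation) illustrates modal class assignment from a matrix of posterior probabilities; the values and the 4-class layout are purely hypothetical and are not output from the study data.

```r
# Illustrative only: with 8 binary items there are 2^8 = 256 possible
# response patterns that an LCA must summarize with a small number of classes.
2^8  # 256

# Hypothetical posterior probabilities for 3 respondents across 4 classes
# (each row sums to 1). Modal assignment places each respondent in the class
# with the largest posterior probability.
posterior <- matrix(c(0.72, 0.10, 0.12, 0.06,
                      0.05, 0.81, 0.09, 0.05,
                      0.20, 0.15, 0.25, 0.40),
                    nrow = 3, byrow = TRUE)

max.col(posterior)        # assigned classes: 1, 2, 4
apply(posterior, 1, max)  # assignment certainty is < 1, hence classification error
```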


Recruitment

A 58-item survey was administered in October 2020 using the Qualtrics platform. Invitation emails with a unique hyperlink were sent to ~100,000 licensed providers with emails drawn from a curated, nationwide list. The study oversampled psychiatrists to address PA in mental health care settings. The survey took ~10 minutes to complete (X̄=3.73). A handful of surveys had to be discarded because providers started the web-based survey but failed to produce sufficient usable data (97.8% usable with 98.11% survey completion). The response rate for the survey was 1.2%. Additional details of survey administration and sampling procedures can be found in the studies by Salzbrenner et al [13,16].

Measures

We used a total of 12 latent class indicators to model subgroup membership. These included number of PAs completed in a week, number of hours spent on PA, length of time waiting for PA decisions from health plans (past week), length of time to complete PAs, percentage of medication requests approved upon appeal, challenges associated with identifying appropriate step therapy requirements prior to prescribing medication, needing to send additional clinical documentation, not being notified by the insurer of a medication approval, not being notified of a medication denial, and being denied PA because the request was missing specific adverse effects of past medication, because of dosing issues, and because of formulation issues. Collectively, these 12 measures capture the providers’ degree of engagement with PA, challenges associated with PA (ie, barriers and obstacles), and provider-insurer communication issues revolving around PA.

All indicators were dichotomized to 0/1, where “1” indicates heavy involvement in PA and numerous challenges. Dichotomization is supported when the goal is to capture whether a provider’s experience with PA occurred (yes or no) rather than to model distributional behavior with central moments [17,18]. Covariates in the model include provider characteristics (age, race, and gender), type of provider (DO/MD vs nurse practitioner [NP]/physician assistant), provider subspecialty (psychiatrist vs all others), and practice characteristics (active patient load and the number of providers that can prescribe medications). Three continuous measures were modeled as “distal outcomes.” These included “In what percentage of cases do you prescribe a different medication than initially planned due to prior authorization delays?” (ranging from 1 “less than 10%” to 5 “>50%”); “How often do you avoid prescribing newer medications due to anticipated difficulties with prior authorization, even if you feel patients meet evidence-based guidelines for their use?” (ranging from 1 “very rarely” to 5 “extremely often”); and “How often have you modified a diagnosis to obtain a prior authorization?” (ranging from 1 “rarely” to 5 “extremely often”).
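As a minimal sketch of this recoding step (the column names and cut points below are hypothetical and are not the study's actual coding rules), the dichotomization can be expressed in R as follows:

```r
# Hypothetical recoding of survey items to 0/1 indicators, where 1 flags heavy
# PA involvement or a reported problem. Names and thresholds are illustrative only.
dat$long_wait   <- as.integer(dat$wait_time_cat >= 3)    # lengthy wait for a PA decision
dat$extra_docs  <- as.integer(dat$extra_docs_freq >= 2)  # had to send additional documentation
dat$denied_dose <- as.integer(dat$deny_dosing == "Yes")  # denial because of dosing issues
```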

The LCA analyses were conducted using Mplus statistical software [19]. Imputation procedures to correct missing data for the covariates were conducted using the MICE procedure in R [20,21]. This is a fully conditional imputation approach using predictive mean matching, which considers the distributional characteristics of each variable with missing data in a multivariate framework [22]. We used 20 imputations, which is sufficient to obtain unbiased parameter estimates [23].
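A minimal sketch of this imputation step, assuming a data frame named covars that holds the covariates with missing values (the object name and seed are illustrative, not taken from the study code):

```r
library(mice)

# Predictive mean matching ("pmm") with 20 imputations, as described above.
imp <- mice(covars, m = 20, method = "pmm", seed = 2020, printFlag = FALSE)

# One completed data set; the analysis would pool results across all 20.
covars_complete <- complete(imp, action = 1)
```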

We first tested LCA models with 2-8 classes. Selection of the best-fitting model was based on the Akaike information criterion [24], Bayesian information criterion [25], entropy [26], and the log-likelihood statistical fit index (LL). These statistics provide a means to gauge whether a model with k–1 classes vs k classes is superior in fit. As more classes are extracted, there should be some shrinkage in the information criteria. The LL statistic reflects the likelihood of observing the empirical data given the set of parameter estimates (the logarithm of the likelihood is used so that higher values closer to 0 indicate better fit). Entropy (ranging from 0 to 1) is a standardized measure of classification uncertainty, with values closer to 1 denoting better classification certainty. Conceptually, we looked for evidence of clear class separation with distinct response patterns for the different classes (ie, class enumeration) [27]. We also wanted to avoid small or sparse cells (<5%) that may not generalize or replicate [28].
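The models themselves were fit in Mplus. As a hedged R analogue of this enumeration step (using the poLCA package and a hypothetical data frame items holding the 12 dichotomized indicators; poLCA expects categories coded as positive integers), comparing 2- to 8-class solutions might look like the following sketch:

```r
library(poLCA)

# Recode 0/1 indicators to 1/2 as poLCA requires; "items" is hypothetical.
items12 <- as.data.frame(lapply(items, function(x) x + 1))
f <- as.formula(paste("cbind(", paste(names(items12), collapse = ","), ") ~ 1"))

# Relative entropy: 1 - sum(-p*log(p)) / (N*log(K)); closer to 1 = clearer classification.
relative_entropy <- function(post) {
  n <- nrow(post); k <- ncol(post)
  p <- pmax(post, 1e-12)              # guard against log(0)
  1 - (-sum(p * log(p))) / (n * log(k))
}

# Fit 2- to 8-class models and collect AIC, BIC, log-likelihood, and entropy.
fits <- lapply(2:8, function(k) {
  m <- poLCA(f, data = items12, nclass = k, maxiter = 5000, nrep = 10, verbose = FALSE)
  data.frame(classes = k, AIC = m$aic, BIC = m$bic, LL = m$llik,
             entropy = relative_entropy(m$posterior))
})
do.call(rbind, fits)
```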

Following derivation of the class structure, we adjusted the model for covariates using the R3STEP procedure available in the Mplus software program [29] (see Supplement 1 in Multimedia Appendix 1 for an explanation of how this procedure works). We then modeled relations between the class structure and 3 distal outcomes using the Bolck, Croon, and Hagenaars (BCH) procedure available in the Mplus statistical program [30] (see Supplement 2 in Multimedia Appendix 1 for more about the BCH procedure). Mixture model and subsequent multinomial logistic model analyses were conducted with weights to adjust for nonresponse. In total, 5 auxiliary variables were used to compute weights, including sex, age, practitioner role (MD/DO vs NP and physician assistant), and specialty type (available for both respondents and nonrespondents). Weights were obtained using iterative logistic regression predicting presence in the sample vs the population (delimiting sample data from the population file to avoid overstating presence). This was done to approximate population values and was implemented using propensity weighting strategies to make the sample distributions match the known population distribution. This generates a response probability for each auxiliary measure. A bias statistic was estimated as the difference in parameters (regression coefficients and SEs) between the expected value (based on the population of providers) and the sample value. The weighted and unweighted results were virtually identical; thus, for ease of interpretation, we report only the unweighted results in this paper.
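A minimal sketch of the weighting logic, assuming a stacked file named frame with an in_sample indicator and the auxiliary variables available for both respondents and nonrespondents (all names are hypothetical; the study's actual weighting was iterative and applied within the Mplus models):

```r
# Logistic model of sample membership on the auxiliary variables.
ps_model <- glm(in_sample ~ sex + age + role + specialty,
                family = binomial, data = frame)

# Inverse of the estimated response probability = nonresponse weight.
frame$resp_prob <- fitted(ps_model)
w <- 1 / frame$resp_prob[frame$in_sample == 1]

# Normalize so the weights sum to the respondent sample size.
w <- w * (sum(frame$in_sample) / sum(w))
```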

Statistical Analysis

We used a Monte Carlo simulation to estimate power for the LCA analyses [31]. With a finite mixture model, the question of power revolves around having a sufficient sample to extract the right number of classes [27]. Computing power for a mixture model is not as straightforward as with regression or factor analysis because traditional power analysis with precise parameter estimates cannot be used given the boundary conditions on the parameter estimates (item response probabilities [IRPs] and latent class prevalences are between 0 and 1, and the number of classes is indeterminate). Sample size and power considerations for mixture models can instead be addressed using a Monte Carlo simulation. We specified a model with 5 covariates and up to 5 classes, varying the thresholds (logits) for class composition. The simulation used maximum likelihood estimation with an expectation-maximization algorithm and 10,000 replications (each at the study sample size, with the goal of gaining stability in the parameter values), averaging parameter values across these samples.

The study has sufficient power (≥.80) to obtain adequate coverage (the proportion of replications for which the 95% CI contains the true parameter value), with low levels of parameter bias (computed as [average simulated parameter value across replications − population parameter value] / population parameter value, and not exceeding 10% for parameter bias and 5% for standard error bias). In all cases, power is interpreted as the proportion of replications in which the null hypothesis stating the parameter is zero can be rejected at the .05 level of significance (ie, the probability of rejecting the null when it is false).
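As a sketch of how these quantities are summarized from simulation output (the study's simulation was run in Mplus; the inputs here are hypothetical), assume est and se hold one parameter's estimates and standard errors across replications and truth is the generating population value:

```r
# Coverage, relative bias, and power across Monte Carlo replications.
summarize_mc <- function(est, se, truth, alpha = 0.05) {
  z <- qnorm(1 - alpha / 2)
  lower <- est - z * se
  upper <- est + z * se
  c(
    coverage   = mean(lower <= truth & truth <= upper),  # 95% CI contains the true value
    param_bias = (mean(est) - truth) / truth,            # relative parameter bias (target < 10%)
    se_bias    = (mean(se) - sd(est)) / sd(est),          # relative SE bias (target < 5%)
    power      = mean(abs(est / se) > z)                  # reject H0: parameter = 0 at alpha = .05
  )
}
```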

Ethical Considerations

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional or national research committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study. The study received Institutional Review Board approval from the University of Nebraska Medical Center (IRB # 00000672 Protocol # 423‐19-EP). The providers received a US $10 gift card upon completion.


Participant Characteristics

The sample was 48.9% (574/1173) female with a mean age of 50.5 (SD 12.9) years. A majority were MD/DO providers (76%) with a smaller percentage NPs (14%) or physician assistants (10%). A majority of the sample was White (67%), followed by Asian (18%) and 4% identified as non-White Hispanic. The largest proportion of respondents were in Psychiatry per study design (44.9%), followed by Internal Medicine (18%), Dermatology (13.1%), Gastroenterology (8.0%), Neurology (5.9%), Oncology (5.8%), and Rheumatology (4.4%).

Table S1 in Multimedia Appendix 2 contains additional sample information, including comparisons based on gender, race, and practice subspecialty. The largest share of providers (44%) worked in practices with fewer than 5 prescribers, 23.5% had between 5 and 10, and another 18% had 20 or more (the remaining percentages were much smaller). Active patient load varied considerably, with the largest group (48%) having over 200 patients, while the remaining providers were roughly equally split among smaller panels (<25, 11%; 25-50, 13%; 51-100, 13%; 101-200, 14%). The size of practices also varied considerably: the majority (67%) had fewer than 5 advanced practice providers, 18% had between 5 and 10, and the remaining categories were much smaller (eg, 2% with 16-20 providers and 6.5% with 20 or more).

LCA Results

Table S2 in Multimedia Appendix 3 contains the model fit indices corresponding to the 2‐8 class LCA models. Upon careful inspection, the 4-class model provided the best fit, noted by shrinkage in the Akaike information criterion, Bayesian information criterion, and LL statistic with the progressive extraction of classes. Relative entropy was less helpful in determining which model to choose, as the values fluctuated up and down with the extraction of additional classes. We further inspected the pattern of IRPs for all the models, looking for any distinguishing features of class membership and the latent class prevalence for the different classes within each model. The goal is to select the most parsimonious model that most efficiently captures the underlying behaviors of the sample participants while simultaneously obtaining the best class enumeration based on the unique response patterns.

Table 1 shows the IRPs for the 4-class model and should be read in conjunction with Figure 1, which graphically portrays the IRPs. The IRPs indicate the probability of endorsing an item conditional on class membership. Class 1 (25.44%) consisted of providers who endorsed lengthy waiting times for PA decisions from health plans (ρ=.794), a high percentage of denied medications approved upon appeal (ρ=.608), the need to send additional documentation (ρ=.750), and PA denial because of dosing (ρ=.788) or formulation issues (ρ=.651). Considering this, we labeled this class “High Denial PA.” Class 2 (15.58%) was distinguished because its members endorsed only 1 item close to the .6 threshold, excessive waiting time for PA (ρ=.599). Given that members of this class did not endorse any other items above the .6 threshold, we labeled it “Low Volume PA.” Class 3 (19.91%) had 4 items above the critical .6 threshold (long average wait for PA [ρ=.826], having to send additional clinical documentation [ρ=.754], not being notified of medication approval [ρ=.877], and not being notified of medication denial [ρ=.932]), as well as challenges with step therapy (ρ=.595), which is reasonably close to the .6 threshold. We labeled this class “Problematic Communication Issues.” Class 4 (39.06%) was distinguished by the fact that its members endorsed almost all indicators except for the number of PAs completed in a week (ρ=.474) and hours spent personally on PA (ρ=.066). The latter indicates that members of this class did not personally spend many hours working on PA. The remaining indicators were highly endorsed (average ρ=.840), and as a result, we labeled this class “Problematic PA Experiences.” It is notable that class 4 endorsed significant workflow burden despite not spending significant time per week on PA. This could be reflective of administrative delegation.

Table 1. Item response probabilities for the 4-class model.
Item | Latent class 1a | Latent class 2b | Latent class 3c | Latent class 4d
Class prevalence | 25.44% | 15.58% | 19.91% | 39.06%
# prior authorizations completed in a week | 0.241 | 0.217 | 0.209 | 0.474
# hours personally spent on prior authorization (PA) per week | 0.014 | 0.012 | 0.007 | 0.066
Average wait for PA decision from health plan | 0.794e | 0.599 | 0.826e | 0.863e
Average time PA completion or submission | 0.42 | 0.364 | 0.416 | 0.666e
Percentage of denied medication requests approved on appeal | 0.608e | 0.522 | 0.569 | 0.662e
Challenge to identify appropriate step therapy requirements | 0.562 | 0.32 | 0.595 | 0.731e
Necessary to send additional clinical documentation for medication | 0.75e | 0.415 | 0.754e | 0.94e
Not notified of medication approval | 0.178 | 0.125 | 0.877e | 0.93e
You are not notified of medication denial | 0.000 | 0.023 | 0.932 | 0.876e
Deny PA request missing adverse effects of past medications | 0.461 | 0.152 | 0.438 | 0.847e
Deny PA request because of dosing issues | 0.788e | 0.204 | 0.407 | 0.955e
Deny PA request because of formulation issues | 0.651e | 0.002 | 0.277 | 0.932e

aClass labels: Class 1=High Denial PA.

bClass 2=Low Volume PA.

cClass 3=Problematic Communication Issues.

dClass 4=Problematic PA Experiences.

eThe numbers represent probabilities exceeding .600 (ie, 60% endorsement of some type of issue with PA).

Figure 1. The 4-class latent class analysis (LCA) model. PA: prior authorization.

Table 2 contains the results of the covariate-adjusted models, including the univariate (upper portion) and multivariate (lower portion) multinomial logistic regression models. The univariate model determines whether a covariate is significantly related to class membership and can be used to detect evidence of suppression in the multivariate model. Only 3 of the 7 covariates (age, specialty, and patient load) were significantly related to class membership in both the univariate and multivariate models. In the adjusted models, each additional year of age increased the odds of membership in the High Denial PA class by roughly 2% (OR 1.018) relative to the Problematic PA Experiences reference class; the corresponding increase for the Low Volume PA class was also about 2% per year of age (OR 1.021). Members of the High Denial PA class were over 2 times more likely to be psychiatrists compared to the reference class. Members of the Low Volume PA class as well as those in the Problematic Communication Issues class were less likely to have a high patient load (ORs 0.49 and 0.54, respectively) compared to the reference class 4.
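For readers who want to reproduce this type of model outside Mplus, a hedged R analogue (using nnet::multinom on a hypothetical analysis file dat, with Class 4 as the reference) and the arithmetic behind the per-year interpretation of the age OR look like this:

```r
library(nnet)

# R analogue of the covariate model (the paper used Mplus's R3STEP procedure):
# class membership regressed on covariates, Class 4 as the reference category.
# "dat" and its column names are hypothetical stand-ins for the analysis file.
dat$class <- relevel(factor(dat$class), ref = "4")
fit <- multinom(class ~ age + sex + white + specialty + role + prov_rx + pt_load,
                data = dat, trace = FALSE)
exp(coef(fit))   # odds ratios for each class vs the reference class

# Interpreting the adjusted age OR of 1.018 reported in Table 2:
(1.018 - 1) * 100  # ~1.8% higher odds of High Denial PA membership per year of age
1.018^10           # ~1.20, ie, roughly 20% higher odds per decade of age
```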

Table 2. Multinomial logistic regression predicting class membership.
Latent classa
Covariate | Class 1b: OR (95% CI) | P value | Class 2c: OR (95% CI) | P value | Class 3d: OR (95% CI) | P value | Class 4e
Prevalence | 25.44% | | 15.58% | | 19.91% | | 39.06%
Unadjusted ORf
 Age | 1.023 (1.007, 1.039) | .005 | 1.02 (1.001, 1.039) | .038 | 0.998 (0.979, 1.017) | .797 | Ref.
 Sexg | 0.812 (0.557, 1.183) | .278 | 0.735 (0.474, 1.139) | .168 | 0.903 (0.582, 1.4) | .648 | Ref.
 Whiteh | 0.995 (0.655, 1.51) | .98 | 0.818 (0.503, 1.331) | .418 | 1.116 (0.677, 1.838) | .668 | Ref.
 Specialtyi | 2.252 (1.532, 3.311) | <.001 | 1.132 (0.714, 1.793) | .598 | 1.28 (0.824, 1.988) | .271 | Ref.
 Provider rolej | 1.421 (0.913, 2.211) | .119 | 1.627 (0.961, 2.754) | .07 | 1.22 (0.742, 2.005) | .433 | Ref.
 Prov_Rxk | 0.795 (0.548, 1.152) | .226 | 0.738 (0.481, 1.131) | .163 | 1.158 (0.743, 1.805) | .518 | Ref.
 Pt_loadl | 0.894 (0.549, 1.455) | .652 | 0.51 (0.307, 0.847) | .009 | 0.545 (0.33, 0.901) | .018 | Ref.
Adjusted ORm
 Age | 1.018 (1.001, 1.036) | .039 | 1.021 (1, 1.042) | .045 | 0.997 (0.977, 1.018) | .797 | Ref.
 Sex | 0.972 (0.639, 1.478) | .894 | 0.889 (0.543, 1.456) | .641 | 0.952 (0.582, 1.558) | .846 | Ref.
 White | 0.873 (0.56, 1.36) | .549 | 0.755 (0.447, 1.275) | .293 | 1.213 (0.697, 2.11) | .495 | Ref.
 Specialty | 2.069 (1.392, 3.076) | <.001 | 0.943 (0.577, 1.542) | .816 | 1.243 (0.77, 2.006) | .373 | Ref.
 Provider role | 1.167 (0.709, 1.923) | .543 | 1.531 (0.847, 2.768) | .159 | 1.181 (0.641, 2.176) | .594 | Ref.
 Prov_Rx | 0.97 (0.654, 1.436) | .877 | 0.782 (0.495, 1.235) | .291 | 1.216 (0.745, 1.983) | .434 | Ref.
 Pt_load | 0.909 (0.546, 1.513) | .713 | 0.49 (0.292, 0.824) | .007 | 0.537 (0.324, 0.891) | .016 | Ref.

aBased on estimated posterior probabilities [19].

bClass labels: Class 1=High Denial.

cClass 2=Low Volume PA.

dClass 3=Problematic Communication Issues.

eClass 4=Problematic PA Experiences.

fCovariates entered one at a time.

gReference class for each covariate is 0: sex (M=0, F=1).

hWhite (Other=0, White=1).

iSpecialty (Other=0, Psychiatry=1).

jProvider role (Other=0, DO/MD=1).

kProviders who write Rx (Other=0, # of providers ≥5 =1).

lPatient load (Other=0, >50 patients=1).

mCovariates entered as a block [19].

Distal Outcomes

With the BCH procedure, individuals are assigned to their most likely class (see Multimedia Appendix 1 for more on this procedure), creating a nominal variable that can be used for subsequent variable-centered analyses. Tables 3 and 4 show the results of this procedure, including the estimated means for each class (Table 3) and the pairwise comparisons of intercepts between classes (Table 4). The stepwise modeling procedure first contrasted intercepts with only class membership adjusted for covariates (controlling for unique characteristics of individuals within class) and then additionally adjusted the distal outcomes for covariates. These adjustments avoid spurious findings when characteristics associated with class membership influence the outcomes indirectly or directly. Of the 18 pairwise comparisons, 14 were significant, and only 3 would be eliminated with a Bonferroni-type adjustment for multiple comparisons. A positive mean difference indicates that the first class had a larger mean. Overall, the Problematic PA Experiences class (Class 4) had significantly higher means for altering clinical decision-making because of PA issues compared with the remaining 3 classes. This held for all 3 distal outcomes (the mean differences were all negative). Class 2 (Low Volume PA), on the other hand, which was characterized by the lowest endorsement of PA problems, had much lower means than the remaining classes for all 3 clinical decision-making outcomes.
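As a simplified, hedged R approximation of this step (modal classification followed by covariate-adjusted comparisons of class means; unlike the full BCH procedure used in Mplus, this sketch does not correct for classification error, and all object names are hypothetical):

```r
library(emmeans)

# Modal class assignment from an LCA fit (eg, poLCA's predicted classes);
# "lca_fit" and the data frame "dat" are hypothetical stand-ins.
dat$class <- factor(lca_fit$predclass)

# Covariate-adjusted class means for one distal outcome (eg, Q14: prescribing
# a different medication because of PA delays).
fit_q14 <- lm(q14 ~ class + age + specialty + pt_load, data = dat)

# Estimated marginal means per class and unadjusted pairwise contrasts,
# analogous in spirit to Tables 3 and 4.
emmeans(fit_q14, pairwise ~ class, adjust = "none")
```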

Table 3. The estimated distal outcomes per classa.
Outcome | Latent class | Mean | SE
Q14b | C1c | 3.274 | 0.278
Q14 | C2d | 2.452 | 0.275
Q14 | C3e | 3.36 | 0.27
Q14 | C4f | 3.712 | 0.257
Q15g | C1 | 3.102 | 0.231
Q15 | C2 | 2.602 | 0.249
Q15 | C3 | 3.22 | 0.223
Q15 | C4 | 3.527 | 0.218
Q16h | C1 | 2.045 | 0.201
Q16 | C2 | 1.835 | 0.2
Q16 | C3 | 2.173 | 0.197
Q16 | C4 | 2.538 | 0.191

aAll BCH models controlled for covariates.

bLABELS: Q14=Prescribe a different medication due to prior authorization (PA) delays.

cClass 1=High Denial PA.

dClass 2=Low Volume PA.

eClass 3=Problematic Communication Issues.

fClass 4=Problematic PA Experiences.

gQ15=Avoid prescribing newer medication due to PA.

hQ16=Modified a diagnosis to obtain PA.

Table 4. Pairwise comparisons for the latent class analysis (LCA) model with distal outcomesa.
Outcome | Latent class | Mean difference | SE | P value
Q14b | 1c vs 2d | 0.821 | 0.184 | <.001
Q14 | 1 vs 3e | −0.086 | 0.16 | .591
Q14 | 1 vs 4f | −0.439 | 0.13 | .001
Q14 | 2 vs 3 | −0.907 | 0.182 | <.001
Q14 | 2 vs 4 | −1.26 | 0.146 | <.001
Q14 | 3 vs 4 | −0.353 | 0.15 | .019
Q15g | 1 vs 2 | 0.5 | 0.161 | .002
Q15 | 1 vs 3 | −0.118 | 0.13 | .362
Q15 | 1 vs 4 | −0.426 | 0.107 | <.001
Q15 | 2 vs 3 | −0.619 | 0.165 | <.001
Q15 | 2 vs 4 | −0.926 | 0.133 | <.001
Q15 | 3 vs 4 | −0.307 | 0.123 | .013
Q16h | 1 vs 2 | 0.211 | 0.112 | .06
Q16 | 1 vs 3 | −0.127 | 0.107 | .232
Q16 | 1 vs 4 | −0.493 | 0.093 | <.001
Q16 | 2 vs 3 | −0.338 | 0.112 | .003
Q16 | 2 vs 4 | −0.704 | 0.094 | <.001
Q16 | 3 vs 4 | −0.366 | 0.108 | .001

aAll models controlled for covariates.

bLABELS Q14=Prescribe a different medication due to prior authorization (PA) delays.

cClass 1=High Denial PA.

dClass 2=Low Volume PA.

eClass 3=Problematic Communication Issues.

fClass 4=Problematic PA Experiences.

gQ15=Avoid prescribing newer medication due to PA.

hQ16=Modified a diagnosis to obtain PA.


Principal Findings

We identified 4 distinct classes of providers based on their PA-related insurance interactions, PA volume, and the various challenges they confront as part of the PA process. These qualitative distinctions have not been noted in the literature, which has focused on descriptively showing the prevalence of providers who encounter problems. This glosses over the fact that not all providers share the same sentiment or have identical PA experiences. Understanding the nature of these experiences and the composition of different subgroups may foster corrective actions to improve efficiency while decreasing provider burden. We also examined whether the unique classes differ in their respective clinical decisions regarding prescribing and diagnosis. The latter issue gets at the heart of how PA affects providers and medical practice.

The largest class, Class 4 (Problematic PA Experiences), encountered the most problems in every facet of the PA process. Although its members had low volumes of PA and spent very few hours engaged in PA, they reported waiting extensively for PA decisions, frequent denials, challenges with step therapy, requests for additional clinical documentation, not being notified of approvals or denials, and denials because of missing adverse medication effects, dosing issues, or formulation issues. In contrast, the smallest class endorsed waiting for PA decisions as the sole challenge faced. The 2 remaining classes endorsed a few problems but in no consistent or definable pattern that could distinguish their PA experiences.

All 4 classes were distinguished by minimal endorsement of 2 questions: how many PAs are completed in a week and the number of hours personally spent on PA. These patterns may indicate either that the medical providers have dedicated staff addressing these problems or that providers represent practices with relatively low volumes of PAs. In the latter case, they still encountered problems, as evidenced by the way they endorsed the other PA-related survey questions. Practically speaking, the sample consisted of a fair representation of different specialties, favoring a larger share of psychiatrists by design (45%) but including specialties like Internal Medicine and Dermatology that routinely have heavy PA exposure.

Modeling covariates helps to further characterize class membership. Of the 7 covariates, age, specialty, and patient load were the most prominent measures to distinguish class membership. Members of Class 1 (High Denial PA) were older than the reference Class 4 (Problematic PA Experiences) and more likely to be psychiatrists. Members of Class 2 (Low Volume PA) were also older than the reference class, suggesting that older providers did not see PA as problematic compared to the members belonging to the Problematic PA Experiences reference class. Patient load was relatively low for members of the Low Volume PA class and likewise Class 3 (Problematic Communication Issues) compared to the reference class. Taken together, providers in the Problematic PA Experiences class were younger and had higher patient loads compared to the other classes, suggesting that large practices with younger providers experience more significant issues with PA and may want to see system-wide changes to alleviate the burdens of PA.

This study also showed that there is a significant relationship between whether providers encounter difficult challenges with PA and 3 measures of their clinical decision-making. This is a strong indication that the burdensome experiences brought about through the PA process have the ability to change medical practice by altering the treatment decisions made by providers. This raises the potential that the actions that providers take to avoid PA-related burdens could have significant downstream implications for patient safety, including undesired clinical outcomes and threats to public health. We only examined the relationship between class membership and clinical decision-making at a high level of analysis. Future studies may want to break this down and examine further what contributes to the changes in clinical decision-making and whether this can be rectified in some fashion. This type of more detailed analysis could be quite informative and influential regarding health policy and practice.

Limitations

There are several limitations worth noting. First, the data are cross-sectional, providing only a glimpse of provider behaviors at one point in time. Longitudinal data would be required to infer a causal sequence (relating, for example, provider-insurer interactions to subsequent prescribing behavior) and to determine whether these behaviors and sentiments are stable or change progressively (for better or worse). This could entail a repeated measures design that samples provider-insurer interactions on numerous occasions and develops a model that includes change in provider and patient behaviors (ie, clinical outcomes). Second, although we sampled more than one specialty, several were not included. Casting a wider net around different practice specialties might shed light on the extent of provider dissatisfaction and whether class structure is consistent across specialties. This could include greater representation of specialties such as Oncology, Gastroenterology, Cardiology, and Nephrology, since those specialists write a large volume of specialty prescriptions. Extending the study to include more practice types that differ by composition would also lend credence to how pervasive PA dissatisfaction is and whether it is volume or specialty dependent. Third, while dichotomization of the 12 class indicators was necessary from an analytic point of view, some of the items (eg, wait times, documentation burden) may contain meaningful gradations that would provide more information and richer class distinction. Therefore, the dichotomization of LCA indicators could obscure meaningful inter-class differences. Finally, the low response rate of 1.2% increases the risk of selection bias; specifically, those who are dissatisfied with PA or have issues revolving around PA may be overrepresented.

Comparison With Prior Work

Past studies have identified provider issues with PA; however, they have treated providers as a single undifferentiated population. This assumes providers will have the same reactions and results when interacting with insurance companies over PA. Moreover, the nature of relations between PA activities and provider clinical outcomes, eg, dissatisfaction with PA procedures, has been examined only at the bivariate level of analysis. This limits what we know about PA and provider behaviors to a very small slice of experiences that are examined in an isolated manner. In the present study, the inclusion of 12 indicators capturing a more holistic set of experiences can shed light on systemic factors that affect provider behaviors.

Conclusions

This study adds important insight into the effect of PA on providers’ experience. Factors such as age and patient load significantly influence the provider experience, as well as prescribing behaviors, which could lead to disparate health outcomes. Recognizing unique provider experiences can help facilitate optimization of patient care while decreasing provider burden.

Clinically, this study reveals a concerning trend that could have dangerous implications for patients. First, the study demonstrated that providers occasionally modify diagnoses in charts in order to avoid insurance denial or the need for PA. This means, in practical terms, that a patient with bipolar disorder may be diagnosed with “major depressive disorder” in order to get effective medication authorized if insurance only authorizes the medication for treatment of unipolar depression. The downstream implications of this can be quite pronounced, affecting public health measures that rely on analysis of this clinical data. For instance, public health officials might use the clinical data to promote health policy to address medical conditions that are not as prevalent as the records may show. Conversely, this could take important resources away from conditions that are underreported by health care providers.

Acknowledgments

This study was supported by funding from the National Institute of Mental Health, National Institutes of Health as part of a Small Business Technology Transfer Grant (1R41MH124600-01). The grant supported development of a prototype software-as-a-solution platform to expedite PA from the point-of-care. The National Institute of Mental Health had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data, preparation, review or approval of the manuscript; and decision to submit the manuscript for publication.

Authors' Contributions

SS and LS contributed equally to the conceptualization of the survey and its contents. FQ engaged data management and conducted the statistical analysis. SS and LS contributed equally to the construction of the manuscript including editing the final draft. LS and FQ jointly conducted the mixture modeling and interpreted the model findings. LS provided supervision for the statistical analyses and writing up the results and creation of tables and figures.

Conflicts of Interest

SS is founder and CEO of Breezmed LLC which is building a software-as-a-solution prototype for prior authorization. He holds multiple patents for a method of optimizing medication selection and prior authorization.

Multimedia Appendix 1

Supplemental materials.

DOCX File, 16 KB

Multimedia Appendix 2

Supplemental eTable 1.

DOCX File, 22 KB

Multimedia Appendix 3

Supplemental eTable 2.

DOCX File, 17 KB

  1. 2018-2019 Academy of Managed Care Pharmacy Professional Practice Committee. Prior authorization and utilization management concepts in managed care pharmacy. J Manag Care Spec Pharm. Jun 2019;25(6):641-644. [CrossRef] [Medline]
  2. AMA. 2024 AMA Prior Authorization Physician Survey. 2024. URL: https://www.ama-assn.org/system/files/prior-authorization-survey.pdf [Accessed 2025-07-8]
  3. CoverMyMeds. ePA National Adoption Scorecard. 2019. URL: https://marketingbuilder.covermymeds.com/insights/scorecard [Accessed 2024-11-22]
  4. Bhattacharjee S, Murcko AC, Fair MK, Warholak TL. Medication prior authorization from the providers perspective: a prospective observational study. Res Social Adm Pharm. Sep 2019;15(9):1138-1144. [CrossRef] [Medline]
  5. Jones LK, Ladd IG, Gregor C, Evans MA, Graham J, Gionfriddo MR. Understanding the medication prior-authorization process: a case study of patients and clinical staff from a large rural integrated health delivery system. Am J Health Syst Pharm. Mar 19, 2019;76(7):453-459. [CrossRef] [Medline]
  6. Consensus Statement on Improving the Prior Authorization Process. Nov 2018. URL: https://www.ama-assn.org [Accessed 2024-11-22]
  7. American Medical Association. 2 big insurers take small steps to ease prior authorization burden. 2023. URL: https://www.ama-assn.org/practice-management/prior-authorization/2-big-insurers-take-small-steps-ease-prior-authorization [Accessed 2024-11-22]
  8. AMCP Electronic Prior Authorization Work Group. Proceedings of the AMCP partnership forum: NCPDP electronic prior authorization standards-building a managed care implementation plan. J Manag Care Spec Pharm. Jul 2015;21(7):545-550. [CrossRef] [Medline]
  9. Hoffman S, Case Western Reserve University School of Law. Step therapy: legal and ethical implications of a cost-cutting measure. 2018. URL: https://scholarlycommons.law.case.edu/cgi/viewcontent.cgi?article=3009&context=faculty_publications [Accessed 2025-06-20]
  10. Miller AP, Shor R, Waites T, Wilson BH. Prior authorization reform for better patient care. J Am Coll Cardiol. May 1, 2018;71(17):1937-1939. [CrossRef] [Medline]
  11. Nayak RK, Pearson SD. The ethics of ‘fail first’: guidelines and practical scenarios for step therapy coverage policies. Health Aff (Millwood). Oct 2014;33(10):1779-1785. [CrossRef] [Medline]
  12. Barnett BS, Bodkin JA. A survey of American psychiatrists concerning medication prior authorization requirements. J Nerv Ment Dis. Jul 2020;208(7):566-573. [CrossRef] [Medline]
  13. Salzbrenner SG, Lydiatt M, Helding B, et al. Influence of prior authorization requirements on provider clinical decision-making. Am J Manag Care. Jul 2023;29(7):331-337. [CrossRef] [Medline]
  14. McCutcheon A. Latent Class Analysis. Sage Publications; 1987. [CrossRef]
  15. Collins LM, Lanza ST. Latent Class and Latent Transition Analysis: With Applications in the Social, Behavioral, and Health Sciences. John Wiley & Sons; 2010. [CrossRef]
  16. Salzbrenner SG, McAdam-Marx C, Lydiatt M, Helding B, Scheier LM, Hill PW. Perceptions of prior authorization by use of electronic prior authorization software: a survey of providers in the United States. J Manag Care Spec Pharm. Oct 2022;28(10):1121-1128. [CrossRef] [Medline]
  17. Cohen J. The cost of dichotomization. Appl Psychol Meas. Jun 1983;7(3):249-253. [CrossRef]
  18. MacCallum RC, Zhang S, Preacher KJ, Rucker DD. On the practice of dichotomization of quantitative variables. Psychol Methods. Mar 2002;7(1):19-40. [CrossRef] [Medline]
  19. Muthén LK, Muthén BO. Mplus User’s Guide. 8th ed. Muthén & Muthén; 2017. URL: https://www.statmodel.com/download/usersguide/MplusUserGuideVer_8.pdf [Accessed 2025-06-20]
  20. White IR, Royston P, Wood AM. Multiple imputation using chained equations: Issues and guidance for practice. Stat Med. Feb 20, 2011;30(4):377-399. [CrossRef] [Medline]
  21. Buuren S, Groothuis-Oudshoorn C. MICE: multivariate imputation by chained equations. Stat Soft. 2011;45. [CrossRef]
  22. Van Buuren S, Brand JPL, Groothuis-Oudshoorn CGM, Rubin DB. Fully conditional specification in multivariate imputation. J Stat Comput Simul. Dec 2006;76(12):1049-1064. [CrossRef]
  23. Graham JW, Olchowski AE, Gilreath TD. How many imputations are really needed? Some practical clarifications of multiple imputation theory. Prev Sci. Sep 2007;8(3):206-213. [CrossRef] [Medline]
  24. Akaike H. Likelihood of a model and information criteria. J Econom. May 1981;16(1):3-14. [CrossRef]
  25. Schwarz G. Estimating the dimension of a model. Ann Statist. 1978;6(2):461-464. [CrossRef]
  26. Celeux G, Soromenho G. An entropy criterion for assessing the number of clusters in a mixture model. J Classif. Sep 1996;13(2):195-212. [CrossRef]
  27. Nylund KL, Asparouhov T, Muthén BO. Deciding on the number of classes in latent class analysis and growth mixture modeling: a Monte Carlo simulation study. Struct Equ Modeling. Oct 23, 2007;14(4):535-569. [CrossRef]
  28. Garrett ES, Zeger SL. Latent class model diagnosis. Biometrics. Dec 2000;56(4):1055-1067. [CrossRef] [Medline]
  29. Asparouhov T, Muthén BO. Auxiliary variables in mixture modeling: three-step approaches using M plus. Struct Equ Modeling. Jul 3, 2014;21(3):329-341. [CrossRef]
  30. Lanza ST, Tan X, Bray BC. Latent class analysis with distal outcomes: a flexible model-based approach. Struct Equ Modeling. Jan 2013;20(1):1-26. [CrossRef] [Medline]
  31. Muthén LK, Muthén BO. How to use a Monte Carlo study to decide on sample size and determine power. Struct Equ Modeling. Oct 2002;9(4):599-620. [CrossRef]


BCH: Bolck, Croon, and Hagenaars
IRP: item response probabilities
LCA: latent class analysis
LL: log-likelihood statistical fit index
NP: nurse practitioner
PA: prior authorization


Edited by Javad Sarvestan; submitted 02.04.25; peer-reviewed by Adekunle Adeoye, Cordia Ogbeta, Mary-Jane Ugbor; final revised version received 30.04.25; accepted 02.05.25; published 29.07.25.

Copyright

© Stephen Salzbrenner, Lawrence Scheier, Fang Qiu. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 29.7.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.