To the Editor: Severe acute pancreatitis (SAP) is a common critical illness in gastroenterology, and 69.3% of patients develop acute kidney injury (AKI).[1] Early diagnosis and treatment are essential to improving the prognosis of these patients.
In recent years, machine learning models have been widely used in medicine because of their excellent predictive performance, and models developed to predict AKI in specific populations have achieved good results.[2] However, because the occurrence of AKI in patients with pancreatitis is also closely related to factors such as amylase and inflammatory markers, a dedicated AKI prediction model for this population is needed.
This study was carried out in four tertiary medical centers in China. Patients with SAP were enrolled as the study subjects, and machine learning was used to establish an AKI prediction model that identifies AKI risk from changes in patients' physiological parameters. We also applied explainable artificial intelligence (XAI) techniques to model interpretation, aiming to provide clinicians with an objective, efficient, and accurate AKI prediction support system.
According to the inclusion and exclusion criteria [Supplementary Method 1, https://links.lww.com/CM9/B906], the clinical data of SAP patients treated in Changshu No. 2 People’s Hospital (CsSH), Zhongda Hospital (ZdH), Pizhou People’s Hospital (PzPH), and Pizhou Hospital of Traditional Chinese Medicine (PzTCMH) from January 2017 to January 2023 were collected. A total of 772 patients were included and grouped into four datasets by hospital. The primary outcome, AKI, was diagnosed according to the Kidney Disease: Improving Global Outcomes (KDIGO) guidelines [Supplementary Method 2, https://links.lww.com/CM9/B906]. The study was approved by the Ethics Committee of Changshu No. 2 People’s Hospital (No. 2017-002), and written informed consent was obtained from all subjects.
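For illustration only, the serum-creatinine arm of the KDIGO criteria can be sketched in Python as below (a minimal sketch assuming pandas; the full adjudication, including the urine-output criterion and baseline determination, follows Supplementary Method 2, and the function and variable names here are hypothetical):

```python
import pandas as pd

def kdigo_aki_by_creatinine(scr: pd.Series, baseline_scr: float) -> bool:
    """Flag AKI from serial serum creatinine (mg/dL) per the KDIGO
    creatinine criteria: an absolute rise >=0.3 mg/dL within 48 h,
    or a rise to >=1.5x baseline within 7 days.
    `scr` is indexed by measurement time (pandas Timestamps)."""
    scr = scr.sort_index()
    # Criterion 1: >=0.3 mg/dL increase within any 48-hour window
    for t, value in scr.items():
        window = scr[(scr.index >= t) & (scr.index <= t + pd.Timedelta(hours=48))]
        if (window - value).max() >= 0.3:
            return True
    # Criterion 2: >=1.5x baseline within 7 days of the first measurement
    week = scr[scr.index <= scr.index[0] + pd.Timedelta(days=7)]
    return bool((week >= 1.5 * baseline_scr).any())
```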
All available clinical data in the electronic medical record system were listed, and candidate features were selected based on expert opinion [Supplementary Table 1, https://links.lww.com/CM9/B906]. The handling of missing values is described in Supplementary Method 3, https://links.lww.com/CM9/B906. Min–max normalization was then applied to rescale each feature to the range [0, 1] so that all features were on a common scale during model fitting.
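As a brief sketch of this scaling step (scikit-learn assumed; the variable names are illustrative):

```python
from sklearn.preprocessing import MinMaxScaler

# X_train / X_test are numeric feature matrices after missing-value handling.
# The scaler is fitted on the training data only and reused on the test data
# so that no information leaks from the test set.
scaler = MinMaxScaler()                      # maps each feature to [0, 1]
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
```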
Supplementary Figure 1 [https://links.lww.com/CM9/B906] provides the outline of the analysis plan of this study.
Random forest (RF) and logistic regression (LR) were used for feature screening: features significantly associated with the outcome were selected, thereby reducing the dimensionality of the data. During feature screening, the data from each hospital were split into training and test sets at a ratio of 7:3. A randomized grid search with cross-validation over repeated random splits was used to tune the models and to reduce the bias introduced by any single unfavorable data partition, yielding the parameters most suitable for feature screening.
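A minimal sketch of this screening step for the RF arm (scikit-learn assumed; `X` is an illustrative pandas DataFrame of candidate features, `y` the AKI labels, and the parameter grid is hypothetical):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# 7:3 split within one hospital's data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Randomized grid search with cross-validation to tune the screening model
param_distributions = {
    "n_estimators": [100, 300, 500],
    "max_depth": [3, 5, 7, None],
    "min_samples_leaf": [1, 2, 5],
}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_distributions, n_iter=20, cv=5,
    scoring="roc_auc", random_state=42)
search.fit(X_train, y_train)

# Feature importances from the tuned model, sorted in descending order
importances = sorted(
    zip(X.columns, search.best_estimator_.feature_importances_),
    key=lambda kv: kv[1], reverse=True)
```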
Using the screened features, prediction models were built with LR, RF, support vector machine (SVM), decision tree (DT), and extreme gradient boosting (XGBoost). Each model was trained on a set combining the data of three hospitals and externally tested on the data of the fourth hospital. The average of each performance metric was calculated, including the area under the curve (AUC), accuracy, recall, precision, and F1 score.
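A rough sketch of this leave-one-hospital-out evaluation (scikit-learn and xgboost assumed; `datasets` is a hypothetical dict mapping each hospital to its `(X, y)` arrays):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier
from sklearn.metrics import (roc_auc_score, accuracy_score,
                             recall_score, precision_score, f1_score)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(),
    "SVM": SVC(probability=True),
    "DT": DecisionTreeClassifier(),
    "XGBoost": XGBClassifier(eval_metric="logloss"),
}

hospitals = ["CsSH", "ZdH", "PzPH", "PzTCMH"]
for test_site in hospitals:                        # external test cohort
    train_sites = [h for h in hospitals if h != test_site]
    X_train = np.vstack([datasets[h][0] for h in train_sites])
    y_train = np.concatenate([datasets[h][1] for h in train_sites])
    X_test, y_test = datasets[test_site]
    for name, model in models.items():
        model.fit(X_train, y_train)
        prob = model.predict_proba(X_test)[:, 1]
        pred = (prob >= 0.5).astype(int)
        print(test_site, name,
              roc_auc_score(y_test, prob), accuracy_score(y_test, pred),
              recall_score(y_test, pred), precision_score(y_test, pred),
              f1_score(y_test, pred))
```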
To verify the rationality of the model and to identify possible AKI predictors, Shapley additive explanations (SHAP) were used to interpret the XGBoost model. The TreeExplainer method was used to combine multiple local explanations across the dataset into SHAP summary plots that clearly display the distribution of feature effects. SHAP force plots were also produced, taking a single sample as an example to show how its feature values influence the prediction.
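A brief sketch of how TreeExplainer-based summary and force plots can be produced for a fitted XGBoost model (the `shap` package assumed; `xgb_model` and `X_test` are illustrative names):

```python
import shap

# TreeExplainer computes SHAP values efficiently for tree ensembles
explainer = shap.TreeExplainer(xgb_model)
shap_values = explainer.shap_values(X_test)

# Summary (beeswarm) plot: distribution of feature effects across the cohort
shap.summary_plot(shap_values, X_test)

# Force plot: contribution of each feature for a single patient (index 0)
shap.force_plot(explainer.expected_value, shap_values[0, :],
                X_test.iloc[0, :], matplotlib=True)
```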
After fitting the data, LR and RF quantify the importance of each variable, and the features were ranked in descending order of relative importance [Supplementary Figure 2A,B, https://links.lww.com/CM9/B906]. The parameters used for feature selection are listed in Supplementary Result 1, https://links.lww.com/CM9/B906. A Venn diagram was then used to take the intersection of the features that together accounted for more than 80% of the importance in each of the two models [Supplementary Figure 2C, https://links.lww.com/CM9/B906]. A total of 10 features, namely the Sequential Organ Failure Assessment (SOFA) score, Acute Physiology and Chronic Health Evaluation (APACHE II) score, platelet count (PLT), blood urea nitrogen (BUN), blood uric acid (UA), triglyceride (TG), blood amylase (AMY), alanine aminotransferase (ALT), lactate (Lac), and potassium (K+), were selected for model construction.
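The intersection step can be sketched as follows (illustrative only; `lr_importance` and `rf_importance` are hypothetical dicts of feature importances normalized to sum to 1):

```python
def top_features_by_cumulative_importance(importance: dict, threshold: float = 0.8):
    """Return the smallest set of features whose cumulative relative
    importance reaches the given threshold (e.g., 80%)."""
    ranked = sorted(importance.items(), key=lambda kv: kv[1], reverse=True)
    selected, cumulative = [], 0.0
    for feature, weight in ranked:
        selected.append(feature)
        cumulative += weight
        if cumulative >= threshold:
            break
    return set(selected)

# Features kept for modelling: the intersection of the two screened sets
final_features = (top_features_by_cumulative_importance(lr_importance)
                  & top_features_by_cumulative_importance(rf_importance))
```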
The 772 × 10 dataset (772 patients, 10 features) from the four hospitals that was ultimately used for model construction is described in detail in Supplementary Table 2, https://links.lww.com/CM9/B906.
The LR, RF, XGBoost, SVM, and DT models were trained and tested using the selected features. Considering the five evaluation metrics of AUC, accuracy, precision, recall, and F1 score together, XGBoost performed better than the other four machine learning methods [Supplementary Figure 3 and Supplementary Table 3, https://links.lww.com/CM9/B906]. The hyperparameters used for XGBoost training are listed in Supplementary Result 2, https://links.lww.com/CM9/B906. In addition, Supplementary Table 4, https://links.lww.com/CM9/B906 summarizes the performance of the XGBoost model in the different cohorts. Although the same machine learning method was used, the model performed slightly worse when CsSH, PzTCMH, and PzPH were used as the training set and ZdH was used as the validation set.
The mean absolute SHAP values in the bar charts and the beeswarm plots [Supplementary Figure 4, https://links.lww.com/CM9/B906 and Figure 1] show that the APACHE II and SOFA scores consistently rank in the top two. The SHAP force plot also provides valuable insight into the contribution of individual factors at the sample level from the perspective of instance prediction [Supplementary Figure 5, https://links.lww.com/CM9/B906]. BUN, ALT, UA, and AMY are important in the model output and are positively correlated with AKI risk.
Figure 1: Interpretation of the XGBoost models by SHAP values for each hospital. (A) CsSH cohort, (B) PzTCMH cohort, (C) PzPH cohort, and (D) ZdH cohort. In the beeswarm plots, each row represents a feature, each point represents a sample, and the position on the X-axis represents the influence of that feature on the model prediction. ALT: Alanine aminotransferase; AMY: Blood amylase; APACHE II: Acute Physiology and Chronic Health Evaluation; BUN: Blood urea nitrogen; CsSH: Changshu Second Hospital; K+: Potassium; Lac: Lactate; PCT: Procalcitonin; PzPH: Pizhou People’s Hospital; PzTCMH: Pizhou Hospital of Traditional Chinese Medicine; SHAP: Shapley additive explanations; SOFA: Sequential Organ Failure Assessment; TG: Triglyceride; UA: Blood uric acid; XGBoost: Extreme gradient boosting; ZdH: Zhongda Hospital.
In this multicenter study, we validated the accuracy of machine learning risk prediction models. The model we constructed can accurately predict AKI risk using only a small number of clinical variables. Compared with previous studies based on the Medical Information Mart for Intensive Care IV database, the prediction performance of the machine learning model is significantly better than that of the Bedside Index for Severity in Acute Pancreatitis (BISAP), the Ranson scoring system, the APACHE II score, and a nomogram.[3]
Among all the machine learning models, XGBoost achieved the highest AKI prediction accuracy, which may be because XGBoost is a regularized tree-boosting ensemble learning algorithm that helps prevent overfitting and offers good robustness. Of course, prospective randomized controlled trials comparing the impact of predictive models on clinical decision-making are needed in the future to establish the true effectiveness of XGBoost.
Compared with the other three hospitals, the model performed worse on ZdH, which may be due to the imbalance in data volume: models developed on training sets with larger data volumes tend to have better prediction performance. However, the limited interoperability of data between medical institutions remains the greatest challenge for artificial intelligence in the medical field.[4] Further improving the degree of digitization in health care and enabling the sharing of electronic medical record data among medical institutions will be necessary in the future.
In a clinical decision support system, ensuring the reliability of the decision-making process is a prerequisite for doctors to make diagnoses. The General Data Protection Regulation (GDPR) recently adopted by the European Union explicitly states that users have the right to demand a logical explanation when automated decision-making is involved.[5] Therefore, applying interpretable artificial intelligence in the medical field is necessary. In this study, the TreeExplainer method was used to explain XGBoost; it is an efficient method for estimating SHAP values that provides detailed, feature-level explanations for tree-based models. In summary, model-based feature interpretation will help doctors make more informed decisions rather than relying entirely on algorithmic outputs.
Conflicts of interest
None.
References
1. Zhou J, Li Y, Tang Y, Liu F, Yu S, Zhang L, et al. Effect of acute kidney injury on mortality and hospital stay in patient with severe acute pancreatitis. Nephrology (Carlton) 2015;20:485–491. doi: 10.1111/nep.12439.
2. Luo XQ, Kang YX, Duan SB, Yan P, Song GB, Zhang NY, et al. Machine learning–based prediction of acute kidney injury following pediatric cardiac surgery: Model development and validation study. J Med Internet Res 2023;25:e41142. doi: 10.2196/41142.
3. Wu S, Zhou Q, Cai Y, Duan X. Development and validation of a prediction model for the early occurrence of acute kidney injury in patients with acute pancreatitis. Ren Fail 2023;45:2194436. doi: 10.1080/0886022X.2023.2194436.
4. Skripcak T, Belka C, Bosch W, Brink C, Brunner T, Budach V, et al. Creating a data exchange strategy for radiotherapy research: Towards federated databases and anonymised public datasets. Radiother Oncol 2014;113:303–309. doi: 10.1016/j.radonc.2014.10.001.
5. Rumbold JMM, Pierscionek B. The effect of the general data protection regulation on medical research. J Med Internet Res 2017;19:e47. doi: 10.2196/jmir.7108.