The COVID-19 pandemic has posed a great challenge to the medical community because little is known about its clinical course, therapeutic options, and laboratory monitoring tools for diagnosis, prognosis, and surveillance. This review focuses on immune biomarkers that can be measured in peripheral blood in a clinical laboratory under routine conditions to monitor the innate immune system response in the acute phase, as well as the adaptive immune response established both after infection and vaccination.
A PubMed search was performed covering January 2020 to June 2021 to extract biomarkers suitable for monitoring the immune response and outcome of COVID-19 and therapeutic interventions, including vaccination.
To monitor the innate immune response, cytokines such as interleukin-6 or acute phase reactants such as C-reactive protein or procalcitonin can be measured on autoanalyzers, complemented by automated white blood cell differential counts. The adaptive immune response can be followed by commercially available enzyme-linked immunospot (ELISpot) assays to assess the specific activation of T cells or by monitoring immunoglobulin A (IgA), IgM, and IgG antibodies in serum to follow B-cell activation. As antigens of the SARS-CoV-2 virus, spike and nucleocapsid proteins are particularly suitable and allow differentiation between the immune response after infection and after vaccination.
Routine immune monitoring of COVID-19 is feasible in clinical laboratories with commercially available instruments and reagents. Whether biomarkers reflecting the response of the innate and adaptive immune systems can be used to make predictions and to individualize therapeutic interventions or vaccination strategies needs to be determined in appropriate clinical trials. Promising preliminary data are already available from single-center reports and completed or ongoing vaccination trials.
The individualization of immunosuppression is an approach for preventing rejection in the early phase after transplantation and for avoiding the long-term side effects of overimmunosuppression. Pharmacodynamic markers, either specific or nonspecific, have been proposed as complementary tools to therapeutic drug monitoring of immunosuppressive drugs. A key event in graft rejection is the activation and proliferation of the recipient's lymphocytes, particularly T cells. Activated T cells express surface receptors, such as CD25 (the IL-2 receptor) and CD71 (the transferrin receptor), or co-stimulatory molecules (CD26, CD27, CD28, CD30, CD154 or CD40L, and CD134). Both surface marker expression and cell proliferation are predominantly assessed by flow cytometry. Protocols have been established and utilized for both in vitro and ex vivo investigations with either isolated lymphocytes or whole blood.
This article reviews the current body of research regarding the use of lymphocyte proliferation and surface activation markers with an emphasis on T cells. Experimental and clinical results related to these markers, as well as methodological issues and open questions, are addressed.
► T cell activation leads to cell surface marker expression and cell proliferation. ► The innate immune system enhances the adaptive alloimmune response. ► Surface marker expression and cell proliferation can be assessed by flow cytometry. ► Immunosuppression affects cell surface marker expression and cell proliferation. ► Surface markers and cell proliferation have a potential to guide immunosuppression.
Donor-derived cell-free DNA (dd-cfDNA) is a noninvasive biomarker for comprehensive monitoring of allograft injury and rejection in kidney transplantation (KTx). dd-cfDNA quantified as copies/mL plasma (dd-cfDNA(cp/mL)) was compared to the dd-cfDNA fraction (dd-cfDNA(%)) at prespecified visits in 189 patients over 1 year post KTx. In patients (N = 15, n = 22 samples) with biopsy-proven rejection (BPR), median dd-cfDNA(cp/mL) was 3.3-fold and median dd-cfDNA(%) 2.0-fold higher (82 cp/mL and 0.57%, respectively) than the medians in stable-phase patients (N = 83, n = 408) without rejection (25 cp/mL; 0.29%). Results for acute tubular necrosis (ATN) were not significantly different from those with BPR. dd-cfDNA identified unnecessary biopsies triggered by a rise in plasma creatinine. Receiver operating characteristic (ROC) analysis showed superior performance (P = .02) of dd-cfDNA(cp/mL) (AUC = 0.83) compared to dd-cfDNA(%) (AUC = 0.73). Diagnostic odds ratios were 7.31 for dd-cfDNA(cp/mL) and 6.02 for dd-cfDNA(%) at thresholds of 52 cp/mL and 0.43%, respectively. Plasma creatinine showed a low correlation (r = 0.37) with dd-cfDNA(cp/mL). In a patient subset (N = 24), there was a significantly higher rate of patients with elevated dd-cfDNA(cp/mL) among those with lower tacrolimus levels (<8 μg/L) compared to the group with higher tacrolimus concentrations (P = .0036), suggesting that dd-cfDNA may detect inadequate immunosuppression resulting in subclinical graft damage. Absolute dd-cfDNA(cp/mL) allowed for better discrimination of KTx patients with BPR than dd-cfDNA(%) and is useful to avoid unnecessary biopsies.
Donor‐derived cell‐free DNA concentrations in combination with fractions measured repeatedly after kidney transplantation allow for clinical laboratory monitoring of graft damage, including rejection, to aid personalized patient care.
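The diagnostic odds ratio used to compare the two dd-cfDNA readouts is a standard summary of test performance at a fixed threshold: DOR = (sensitivity / (1 − sensitivity)) / ((1 − specificity) / specificity). A minimal sketch in Python (the input values below are illustrative round numbers, not taken from the study):

```python
def diagnostic_odds_ratio(sensitivity: float, specificity: float) -> float:
    """Standard diagnostic odds ratio at a fixed decision threshold:
    odds of a positive test in the diseased divided by the same odds
    in the non-diseased."""
    positive_odds_diseased = sensitivity / (1 - sensitivity)
    positive_odds_healthy = (1 - specificity) / specificity
    return positive_odds_diseased / positive_odds_healthy

# Illustrative only: 75% sensitivity and 70% specificity
print(round(diagnostic_odds_ratio(0.75, 0.70), 2))  # → 7.0
```

A higher DOR at the respective threshold indicates better overall discrimination, which is consistent with the direction of the ROC comparison reported above.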
Abstract
Background
Donor-derived cell-free DNA (dd-cfDNA) is reportedly a valuable tool for graft surveillance following kidney transplantation (KTx). Possible changes in dd-cfDNA(%) reference values over time have not been evaluated. For long-term monitoring after KTx, changes in host cfDNA might represent a biasing factor in dd-cfDNA(%) determinations.
Methods
Plasma samples were obtained (n = 929) 12–60 months after engraftment in a cross-sectional cohort of 303 clinically stable KTx recipients. Total cfDNA(copies/mL), dd-cfDNA(%), and dd-cfDNA(copies/mL) were determined using droplet-digital PCR. Stability of threshold values in these stable KTx recipients over time was assessed by 80th, 85th, and 90th quantile regression.
Results
Upper percentiles of total cfDNA showed a significant decline of −1902, −3589, and −4753 cp/mL/log(month) (P = 0.014, <0.001, and 0.017, respectively), resulting in dd-cfDNA(%) percentiles increasing by 0.25, 0.46, and 0.72%/log(month) (P = 0.04, 0.001, and 0.002, respectively), with a doubling of the 85th percentile value by 5 years. In contrast, dd-cfDNA(cp/mL) was stable during the observation period (P = 0.52, 0.29, and 0.39). In parallel, increasing white blood cell counts and decreasing tacrolimus concentrations were observed over time. After 5 years, the median total cfDNA was still 1.6-fold (P < 0.001) higher in KTx recipients than in healthy controls (n = 135) and 1.4-fold (P < 0.001) higher than in patients with other medical conditions (n = 364).
Conclusions
The time-dependent decrease of host cfDNA resulted in an apparent increase of dd-cfDNA fraction in stable KTx patients. For long-term surveillance, measurement of absolute dd-cfDNA concentrations appears to be superior to percentages to minimize false positive results.
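The fraction-inflation effect described in this conclusion follows directly from the definition dd-cfDNA(%) = 100 × dd-cfDNA(cp/mL) / total cfDNA(cp/mL): if host cfDNA falls while donor-derived copies stay constant, the fraction rises without any change in graft damage. A toy calculation (the numbers are hypothetical and chosen only to mirror the direction of the reported trend):

```python
dd_cfdna_cp = 25.0     # absolute donor-derived cfDNA, stable over time (cp/mL)
total_early = 10000.0  # total cfDNA early after engraftment (cp/mL)
total_late = 5000.0    # total cfDNA after host-derived cfDNA has declined (cp/mL)

frac_early = 100 * dd_cfdna_cp / total_early
frac_late = 100 * dd_cfdna_cp / total_late

# The fraction doubles although absolute dd-cfDNA is unchanged,
# which would mimic graft injury on a percentage-based readout.
print(frac_early, frac_late)
```

This is why the absolute concentration, which is independent of the host cfDNA denominator, is argued to be the more robust long-term readout.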
With current treatment regimens, a relatively high proportion of transplant recipients experience underimmunosuppression or overimmunosuppression. Recently, several promising biomarkers have been identified for determining patient alloreactivity, which help in assessing the risk of rejection and the individual response to the drug; others correlate with graft dysfunction and clinical outcome, offering a realistic opportunity for personalized immunosuppression. This consensus document aims to help tailor immunosuppression to the needs of the individual patient. It examines current knowledge on biomarkers associated with patient risk stratification and immunosuppression requirements that have been generally accepted as promising. It is based on a comprehensive review of the literature and the expert opinion of the Biomarker Working Group of the International Association of Therapeutic Drug Monitoring and Clinical Toxicology. The quality of evidence was systematically weighted, and the strength of recommendations was rated according to the GRADE system. Three types of biomarkers are discussed: (1) those associated with the risk of rejection (alloreactivity/tolerance), (2) those reflecting individual response to immunosuppressants, and (3) those associated with graft dysfunction. Analytical aspects of biomarker measurement and novel pharmacokinetic-pharmacodynamic models accessible to the transplant community are also addressed. Conventional pharmacokinetic biomarkers may be used in combination with those discussed in this article to achieve better outcomes and improve long-term graft survival. Our group of experts has made recommendations for the most appropriate analysis of a proposed panel of preliminary biomarkers, most of which are currently under clinical evaluation in ongoing multicentre clinical trials.
A Next Steps section is also included, in which the Expert Committee commits to sharing this knowledge with the transplant community in the form of triennial updates.
Background:
Although there is evidence that the CYP3A4*22 variant should be considered in tacrolimus dosing in renal transplantation, its impact beyond tacrolimus dose requirements remains controversial.
Methods:
In a cohort of 121 kidney transplant recipients, we analyzed the CYP3A4*1B, CYP3A4*22, and CYP3A5*3 alleles and the ABCB1 variants 1236C>T, 2677G>T/A, and 3435C>T for their impact on exposure and dose requirement. Relevant clinical outcome measures such as acute rejection within the first year after transplantation, delayed graft function, and renal function at discharge (estimated glomerular filtration rate) were evaluated.
Results:
Extensive metabolizers (n = 17, CYP3A4*1/*1 carriers with at least one CYP3A5*1 allele) showed a significantly higher tacrolimus dose requirement (P = 0.004) compared with both intermediate metabolizers (IM, n = 93, CYP3A5*3/*3 plus CYP3A4*1/*1, or CYP3A4*22 carriers plus one CYP3A5*1 allele) and poor metabolizers (n = 11, CYP3A4*22 allele in combination with CYP3A5*3/*3) after onset of therapy. A significantly higher dose requirement was also observed in CYP3A5 expressers compared with non-expressers at onset of therapy (P = 0.046). Using the log-additive genetic model, the area under the curve for the total observation period up to 16 days was significantly associated with the CYP3A5*3 genotype (P = 3.34 × 10⁻⁴) as well as with the IM or extensive metabolizer phenotype (P = 1.54 × 10⁻⁴), even after adjustment for multiple testing. Heterozygous carriers of CYP3A4*22 showed significantly higher areas under the curve than the CYP3A4*1/*1 genotype in the second week post-transplantation (adjusted P = 0.016). Regarding clinical outcomes, acute rejection was significantly associated with human leukocyte antigen mismatch (≥3 alleles; OR = 12.14, 95% CI 1.76, 525.21; P = 0.019 after correction for multiple testing). Graft recipients from deceased donors showed a higher incidence of delayed graft function (OR = 7.15, 95% CI 2.23, 30.46; adjusted P = 0.0008) and a lower estimated glomerular filtration rate at discharge (P = 0.0001). The tested CYP3A4 and CYP3A5 variants did not show any effects on clinical outcome parameters, and ABCB1 variants had no impact on either pharmacokinetics or clinical endpoints.
Conclusion:
At our transplantation center, both CYP3A5*3 and, to a lesser extent, CYP3A4*22 affect tacrolimus pharmacokinetics early after onset of therapy, with consequences for steady-state treatment in routine clinical practice.
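The metabolizer groups defined in the Results reduce to a simple decision rule over two genotype-derived flags: whether the patient carries a CYP3A4*22 allele and whether the patient expresses CYP3A5 (carries at least one CYP3A5*1 allele). A hypothetical helper illustrating that rule (the function name and boolean encoding are assumptions for illustration; a real pipeline would start from validated star-allele calls):

```python
def cyp3a_phenotype(cyp3a4_22_carrier: bool, cyp3a5_expresser: bool) -> str:
    """Classify the combined CYP3A metabolizer phenotype as defined in this study:
    extensive: CYP3A4*1/*1 with at least one CYP3A5*1 allele;
    poor:      CYP3A4*22 carrier combined with CYP3A5*3/*3;
    intermediate: the remaining combinations."""
    if not cyp3a4_22_carrier and cyp3a5_expresser:
        return "extensive"
    if cyp3a4_22_carrier and not cyp3a5_expresser:
        return "poor"
    return "intermediate"

print(cyp3a_phenotype(False, True))   # → extensive
print(cyp3a_phenotype(True, False))   # → poor
print(cyp3a_phenotype(False, False))  # → intermediate
```

Extensive metabolizers, under this rule, are the patients expected to need the highest tacrolimus doses early after transplantation, consistent with the dose-requirement findings above.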
Background
Liquid chromatography-tandem mass spectrometry (LC-MS/MS) is a sensitive method with high specificity. However, its routine use in the clinical laboratory is hampered by its high complexity and lack of automation. Studies demonstrate excellent analytical performance using the first fully automated LC-MS/MS for 25-hydroxy vitamin D and immunosuppressant drugs (ISD) in hospital routine laboratories.
Objectives
Our objectives were (1) to verify the suitability of an automated LC-MS/MS in a commercial laboratory, whose needs differ from those of hospital laboratories, and (2) to examine its usability among operators with various professional backgrounds.
Methods
We assessed the analytical assay performance for vitamin D and the ISDs cyclosporine A and tacrolimus over five months. The assays were compared to an identical analyzer in a hospital laboratory, to in-house LC-MS/MS methods, and to chemiluminescent microparticle immunoassays (CMIA). Nine operators evaluated the usability of the fully automated LC-MS/MS system by means of a structured questionnaire.
Results
The automated system exhibited high precision (CV < 8%), accuracy (bias < 7%), and good agreement with concentrations of external quality assessment (EQA) samples. Comparable results were obtained with an identical analyzer in a hospital routine laboratory. Acceptable median deviations of results versus an in-house LC-MS/MS were observed for 25-OH vitamin D3 (-10.6%), cyclosporine A (-4.3%), and tacrolimus (-6.6%). The median bias between the automated system and immunoassays was only acceptable for 25-OH vitamin D3 (6.6%). All users stated that they had had a good experience with the fully automated LC-MS/MS system.
Conclusions
A fully automated LC-MS/MS can be easily integrated for routine diagnostics in a commercial laboratory.
For decades, oral anticoagulation has been based on vitamin K antagonists such as warfarin, which require pharmacodynamic (PD) drug monitoring to guide therapy. The drug effect is measured by the clotting test prothrombin time and expressed as the international normalized ratio. New direct oral anticoagulants are increasingly used in fixed-dose regimens but are licensed without any therapy monitoring. However, extensive clinical experience has demonstrated that interindividual variations in the response to therapy with direct oral anticoagulants do exist. In situations such as bleeding or thrombosis, therapeutic drug monitoring could be useful. Unfortunately, global coagulation assays such as the prothrombin time or the activated partial thromboplastin time are not suitable for this purpose. To measure drug concentrations, more specific coagulation tests can be used if they are externally calibrated with the respective drugs. For the direct thrombin inhibitor dabigatran etexilate, a calibrated diluted thrombin time or ecarin clotting time can be used, whereas for anti-factor Xa drugs such as rivaroxaban, apixaban, edoxaban, and betrixaban, calibrated anti-factor Xa assays are appropriate. However, the gold standard for measuring drug concentrations is LC-MS/MS. The variation in bleeding and thrombotic events noted with both drug classes under fixed-dose conditions suggests additional interindividual PD differences. Therefore, PD monitoring to individualize therapy may be an option. For dabigatran, this is the inhibition of thrombin formation; for anti-factor Xa drugs, it is the inhibition of factor Xa activity, which can be followed using the functional assays mentioned above but without calibration. Alternatively, thrombin generation assays have been proposed for both drug classes. So far, few clinical data have been published on the potentially beneficial effects of PD monitoring for dose individualization.
The assay platforms for PD monitoring are present in many clinical laboratories, but efforts are needed to validate and standardize available assays to perform appropriate clinical trials.
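The international normalized ratio mentioned above is derived from the patient's prothrombin time (PT), the laboratory's mean normal PT, and the international sensitivity index (ISI) of the thromboplastin reagent, via the standard WHO formula INR = (PT_patient / PT_mean_normal)^ISI. A minimal sketch with illustrative values:

```python
def inr(pt_patient_s: float, pt_mean_normal_s: float, isi: float) -> float:
    """International normalized ratio per the standard WHO formula:
    (patient PT / geometric mean normal PT) raised to the reagent's ISI."""
    return (pt_patient_s / pt_mean_normal_s) ** isi

# Illustrative: patient PT 24 s, mean normal PT 12 s, ISI 1.0 → INR 2.0
print(round(inr(24.0, 12.0, 1.0), 2))
```

The ISI exponent is what normalizes results across thromboplastin reagents of different sensitivities, which is why the INR, unlike the raw PT, is comparable between laboratories.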
This study is the extension of the COVAG study. We compared two RATs, the Panbio COVID-19 Ag Rapid Test (Abbott) and the SD Biosensor Q SARS-CoV-2 Rapid Antigen Test (Roche), with RT-PCR as the reference, against the backdrop of newly emerging variants.
We included 888 all-comers at a diagnostic center between October 20, 2021, and March 18, 2022. RT-PCR-positive samples with a Ct value ≤32 were examined for SARS-CoV-2 variants.
The sensitivities of the Abbott-RAT and Roche-RAT were 65% and 67%, respectively. For both RATs, lower Ct values were significantly correlated with higher sensitivity. For samples with Ct values ≤25, the sensitivities of the Roche-RAT and the Abbott-RAT were 96% and 95%; for Ct values 25-30, both were 19%; and for Ct values ≥30, they were 6% and 2%, respectively. The RATs had substantially higher sensitivities in symptomatic than in asymptomatic participants (76% and 77% vs. 29% and 31% for the Abbott-RAT and Roche-RAT, respectively) and in participants referred for testing by their primary care physician (84%, 85%) compared to participants who sought testing after referral by the health department (55%, 58%) or a warning by the Corona-Warn-App (49%, 49%). In persons with self-reported previous COVID-19, sensitivities were markedly lower than in patients without previous COVID-19: 27% vs. 75% for the Roche-RAT and 27% vs. 73% for the Abbott-RAT. We did not find a significant correlation between vaccination status and sensitivity. The Omicron variant was detected with a sensitivity of 94% and 92%, and the Delta variant with a sensitivity of 80% and 80%, for the Abbott-RAT and Roche-RAT, respectively. This difference is attributable to the lower Ct values of the Omicron samples compared to the Delta samples. When adjusted for the Ct value, a multivariate logistic regression did not show a significant difference between Omicron and Delta. In terms of sensitivity, we found no significant difference between the wild-type and the Omicron and Delta variants, but a significantly lower sensitivity for the Alpha variant compared to the other variants. The specificities were >99% overall.
Rapid diagnostic testing for SARS-CoV-2 antigens is used to combat the ongoing pandemic. In this study, we aimed to compare two RDTs, the SD Biosensor Q SARS-CoV-2 Rapid Antigen Test (Roche) and the Panbio COVID-19 Ag Rapid Test (Abbott), against rRT-PCR.
We included 2,215 all-comers at a diagnostic center between February 1 and March 31, 2021. rRT-PCR-positive samples were examined for SARS-CoV-2 variants.
Three hundred and thirty-eight participants (15%) were rRT-PCR-positive for SARS-CoV-2. The sensitivities of Roche-RDT and Abbott-RDT were 60.4% and 56.8% (P < 0.0001), and the specificities 99.7% and 99.8% (P = 0.076). Sensitivity inversely correlated with rRT-PCR Ct values. The RDTs had higher sensitivities in individuals referred by treating physicians (79.5%, 78.7%) than in those referred by health departments (49.5%, 44.3%) or tested for other reasons (50%, 45.8%); in persons without any comorbidities (74.4%, 71%) compared to those with comorbidities (38.2%, 34.4%); in individuals with COVID-19 symptoms (75.2%, 74.3%) compared to those without (31.9%, 23.3%); and in the absence of SARS-CoV-2 variants (87.7%, 84%) compared to Alpha variant carriers (77.1%, 72.3%). If 10,000 symptomatic individuals are tested, of which 500 are truly positive, the RDTs would generate 38 false-positive and 124 false-negative results. If 10,000 asymptomatic individuals are tested, including 50 true positives, 18 false positives and 34 false negatives would be generated.
The sensitivities of the two RDTs for asymptomatic SARS-CoV-2 carriers are unsatisfactory. Their widespread use may not be effective in the ongoing SARS-CoV-2 pandemic. The virus genotype influences the sensitivity of the two RDTs. RDTs should be evaluated for different SARS-CoV-2 variants.
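The expected false-positive and false-negative counts in the screening scenarios above follow from sensitivity, specificity, and the number of truly positive individuals among those tested. A sketch reproducing the symptomatic scenario (the sensitivity of 75.2% comes from the symptomatic subgroup reported above; the specificity of 99.6% is a back-calculated assumption consistent with the published counts, not an exact study estimate):

```python
def expected_errors(n_tested: int, n_true_pos: int,
                    sensitivity: float, specificity: float) -> tuple:
    """Return (false positives, false negatives) expected when screening
    n_tested people of whom n_true_pos truly carry the virus."""
    false_neg = round(n_true_pos * (1 - sensitivity))
    false_pos = round((n_tested - n_true_pos) * (1 - specificity))
    return false_pos, false_neg

# 10,000 symptomatic people tested, 500 truly positive,
# assumed 75.2% sensitivity and 99.6% specificity
print(expected_errors(10_000, 500, 0.752, 0.996))  # → (38, 124)
```

The same arithmetic explains why performance collapses in low-prevalence asymptomatic screening: with few true positives among those tested, even a small false-positive rate yields false positives on the same order as the true detections.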