Background and Aims
There is limited evidence on the relationship between retention in opioid agonist treatment for opioid dependence and characteristics of treatment prescribers. This study estimated retention in buprenorphine and methadone treatment and its relationship with person, treatment and prescriber characteristics.
Design
Retrospective longitudinal study.
Setting
New South Wales, Australia.
Participants
People entering the opioid agonist treatment programme for the first time between August 2001 and December 2015.
Measurements
Time in opioid agonist treatment (primary outcome) was modelled using a generalized estimating equation model to estimate associations with person, treatment and prescriber characteristics.
Findings
The impact of medication type on opioid agonist treatment retention reduced over time; the risk of leaving treatment when on buprenorphine compared with methadone was highest among those who entered treatment earliest (e.g. 2001–03: odds ratio (OR) = 1.59, 95% confidence interval (CI) = 1.45–1.75) and lowest among those who entered most recently (2013–15: OR = 1.23, 95% CI = 1.11–1.36). In adjusted analyses, risk of leaving was reduced among people whose prescriber had longer prescribing tenure (e.g. 3 versus 8 years: OR = 0.94, 95% CI = 0.93–0.95) compared with prescribers with shorter tenure. Being an Aboriginal and/or Torres Strait Islander person, younger age, past-year psychotic disorder and a greater number of criminal charges in the year before treatment entry were associated with increased risk of leaving treatment.
Conclusion
In New South Wales, Australia, retention in buprenorphine treatment for opioid dependence, compared with methadone, has improved over time since its introduction in 2001. Opioid agonist treatment retention is affected not only by characteristics of the person and their treatment, but also by those of the prescriber: longer prescribing tenure was associated with increased retention in opioid agonist treatment.
Background and Aims
Studies often rely upon self-report and biological testing methods for measuring illicit drug use, although evidence for their agreement is limited to specific populations and self-report instruments. We aimed to examine comprehensively the evidence for agreement between self-reported and biologically measured illicit drug use among all major illicit drug classes, biological indicators, populations and settings.
Methods
We systematically searched peer-reviewed databases (Medline, Embase and PsycINFO) and grey literature. Included studies reported 2 × 2 table counts or agreement estimates comparing self-reported and biologically measured use published up to March 2022. Treating biological results as the reference standard and using random-effects regression models, we evaluated pooled estimates for overall agreement (primary outcome), sensitivity, specificity, false omission rates (the proportion reporting no use who test positive) and false discovery rates (the proportion reporting use who test negative) by drug class, potential consequences attached to self-report (i.e. work, legal or treatment impacts) and time-frame of use. Heterogeneity was assessed by inspecting forest plots.
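As a worked illustration of these definitions (not code from the review), all of the reported measures can be computed from a single study's 2 × 2 table of self-report against biological testing; the function name and cell layout here are illustrative assumptions:

```python
def agreement_metrics(a, b, c, d):
    """Agreement measures from a 2x2 table of self-report vs biological testing.

    With the biological result treated as the reference standard:
      a = reports use, tests positive (true positive)
      b = reports use, tests negative
      c = reports no use, tests positive
      d = reports no use, tests negative (true negative)
    """
    n = a + b + c + d
    return {
        "overall_agreement": (a + d) / n,         # both methods concur
        "sensitivity": a / (a + c),               # biologically positive who report use
        "specificity": d / (b + d),               # biologically negative who report no use
        "false_omission_rate": c / (c + d),       # report no use but test positive
        "false_discovery_rate": b / (a + b),      # report use but test negative
        "diagnostic_odds_ratio": (a * d) / (b * c),
    }
```

For example, a hypothetical study with 40 concordant positives, 45 concordant negatives, 10 unconfirmed self-reports and 5 unreported positive tests gives overall agreement of 0.85.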
Results
From 7924 studies, we extracted data from 207 eligible studies. Overall agreement ranged from good to excellent (> 0.79). False omission rates were generally low, while false discovery rates varied by setting. Specificity was generally high, but sensitivity varied by drug, sample type and setting. Self-report in clinical trials and in situations with no consequences was generally reliable. For urine, recent (i.e. past 1–4 days) self-report produced lower sensitivity and false discovery rates than past-month self-report. Agreement was higher in studies that informed participants that biological testing would occur (diagnostic odds ratio = 2.91, 95% confidence interval = 1.25–6.78). The main source of bias was biological assessments (51% of studies).
Conclusions
While there are limitations associated with self‐report and biological testing to measure illicit drug use, overall agreement between the two methods is high, suggesting both provide good measures of illicit drug use. Recommended methods of biological testing are more likely to provide reliable measures of recent use if there are problems with self‐disclosure.
Abstract
Background
Finger-stick point-of-care and dried blood spot (DBS) hepatitis C virus (HCV) RNA testing increases testing uptake and linkage to care. This systematic review evaluated the diagnostic accuracy of point-of-care testing and DBS to detect HCV RNA.
Methods
Bibliographic databases and conference presentations were searched for eligible studies. Meta-analysis was used to pool estimates.
Results
Of 359 articles identified, 43 studies were eligible and included. When comparing the Xpert HCV Viral Load Fingerstick assay with venous blood samples (7 studies with 987 samples), the sensitivity and specificity for HCV RNA detection were 99% (95% confidence interval [CI], 97%–99%) and 99% (95% CI, 94%–100%), and for HCV RNA quantification were 100% (95% CI, 93%–100%) and 100% (95% CI, 94%–100%). The proportion of invalid results following Xpert HCV Viral Load Fingerstick testing was 6% (95% CI, 3%–11%). When comparing DBS with venous blood samples (28 studies with 3988 samples), the sensitivity and specificity for HCV RNA detection were 97% (95% CI, 95%–98%) and 100% (95% CI, 98%–100%), and for HCV RNA quantification were 98% (95% CI, 96%–99%) and 100% (95% CI, 95%–100%).
Conclusions
Excellent diagnostic accuracy was observed across assays for detection of HCV RNA from finger-stick and DBS samples. The proportion of invalid results following Xpert HCV Viral Load Fingerstick testing highlights the importance of operator training and quality assurance programs.
Active HCV infection can be accurately detected by assays that use point-of-care testing or dried blood spot samples for the determination of HCV RNA.
Pharmaceutical claims data are often used as the primary information source to define medicine exposure periods in pharmacoepidemiological studies. However, critical information on directions for use and the intended duration of medicine supply is often not available. In the absence of this information, alternative approaches are needed to support the assignment of exposure periods. This study summarises the key methods commonly used to estimate medicine exposure periods and dose from pharmaceutical claims data, and describes a method using individualised dispensing patterns to define time-dependent estimates of medicine exposure and dose. This method extends important features of existing methods and also accounts for recent changes in an individual's medicine use. Specifically, it constructs medicine exposure periods and estimates the dose used by considering characteristics of an individual's prior dispensings, accounting for the time between prior dispensings and the amount supplied at each. Guidance on the practical applications of this method is also provided. Although developed primarily for application to databases that do not contain duration of supply or dose information, this method may also facilitate investigations when such information is available and there is a need to consider individualised and/or changing dosing regimens. By shifting reliance away from prescribed duration and dose, individualised dispensing information is used to estimate patterns of exposure and dose for each individual. Reflecting real-world individualised use of medicines with complex and variable dosing regimens, this method offers a pragmatic approach that can be applied to all medicine classes.
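A minimal sketch of the general idea, under stated assumptions: the parameter names (`window`, `default_days`, `grace`) and the moving-window daily-dose estimate are illustrative, not the paper's actual algorithm or parameter choices.

```python
from datetime import date, timedelta

def estimate_exposure(dispensings, window=3, default_days=30, grace=1.5):
    """Estimate time-dependent exposure periods from a dispensing history.

    dispensings: list of (dispensing_date, quantity_supplied), sorted by date.
    For each dispensing, the daily dose is estimated from up to `window`
    prior inter-dispensing intervals (total quantity supplied divided by
    total days between dispensings); the first dispensing falls back to
    `default_days`. A `grace` multiplier extends each period to allow for
    imperfect adherence. All three parameters are illustrative assumptions.
    """
    periods = []
    intervals = []  # (quantity, days_until_next_dispensing) for prior dispensings
    for i, (d, qty) in enumerate(dispensings):
        if intervals:
            recent = intervals[-window:]
            daily_dose = sum(q for q, _ in recent) / sum(t for _, t in recent)
            duration = qty / daily_dose
        else:
            duration = default_days
        periods.append((d, d + timedelta(days=round(duration * grace))))
        if i + 1 < len(dispensings):
            gap = (dispensings[i + 1][0] - d).days
            intervals.append((qty, gap))
    return periods
```

Because the daily dose is re-estimated at each dispensing from that individual's own recent history, the exposure periods lengthen or shorten as the person's dispensing pattern changes, which is the key feature the method adds over fixed-duration approaches.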
Background and Aims
The individual-level effectiveness of opioid agonist treatment (OAT) in reducing mortality is well established, but there is less evidence on population-level benefits. We used modeling informed by linked data from the OAT program in New South Wales (NSW), Australia, to estimate the impact of OAT provision in the community and prisons on mortality, and the impact of eliminating excess mortality during OAT initiation/discontinuation.
Design
Dynamic modeling.
Setting and participants
A cohort of 49 359 individuals who ever received OAT in NSW from 2001 to 2018.
Measurements
Receipt of OAT was represented through five stages: (i) first month on OAT, (ii) short (1–9 months) and (iii) longer (9+ months) duration on OAT, (iv) first month following OAT discontinuation and (v) rest of time following OAT discontinuation. Incarceration was represented as four strata: (i) never or not incarcerated in the past year, (ii) currently incarcerated, (iii) released from prison within the past month and (iv) released from prison 1–12 months ago. The model incorporated elevated mortality post‐release from prison and OAT impact on reducing mortality and incarceration.
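The five OAT exposure stages can be expressed as a simple classifier; the function below is an illustrative sketch of the staging rules as described, not the model's implementation, and the month-based inputs are assumptions.

```python
def oat_stage(months_since_start, months_since_stop):
    """Classify person-time into the five OAT stages described above.

    months_since_start: months since the current OAT episode began
                        (None if currently off OAT).
    months_since_stop:  months since the last OAT episode ended
                        (None if currently on OAT).
    """
    if months_since_start is not None:  # currently receiving OAT
        if months_since_start < 1:
            return "first month on OAT"
        if months_since_start < 9:
            return "short duration (1-9 months)"
        return "longer duration (9+ months)"
    if months_since_stop is not None and months_since_stop < 1:
        return "first month post-discontinuation"
    return "rest of time post-discontinuation"
```

Splitting person-time this way lets the model attach distinct mortality rates to the transition periods (first month on OAT, first month after discontinuation), which the Findings identify as carrying elevated risk.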
Findings
Among the cohort, mortality was 0.9 per 100 person-years; OAT coverage and retention remained high (> 50%; 1.74 years/episode). During 2001–20, we estimate that OAT provision reduced overdose and other-cause mortality among the cohort by 52.8% (95% credible interval [CrI] = 49.4–56.9%) and 26.6% (95% CrI = 22.1–30.5%), respectively. We estimate 1.2 deaths averted and 9.7 life-years gained per 100 person-years on OAT. Prison OAT with post-release OAT linkage accounted for 12.4% (95% CrI = 11.5–13.5%) of all deaths averted by the OAT program, primarily through preventing deaths in the first month post-release. Preventing elevated mortality during OAT initiation and discontinuation could have averted up to 1.4% (95% CrI = 0.8–2.0%) and 3.0% (95% CrI = 2.1–5.3%) of deaths, respectively.
Conclusion
The community and prison opioid agonist treatment program in New South Wales, Australia appears to have substantially reduced population‐level overdose and all‐cause mortality in the past 20 years, partially due to high retention.
Background
Staging of axillary lymph nodes in breast cancer is important for prognostication and planning of adjuvant therapy. The traditional practice of proceeding to axillary lymph node dissection (ALND) if sentinel lymph node biopsy (SLNB) is positive is being challenged and clinical trials are underway. For many centres, this will mean a move away from intra-operative SLNB assessment and utilization of a second procedure to perform ALND. It is sometimes perceived that a delayed ALND results in increased tissue damage and thus increased morbidity. We compared morbidity in those undergoing SLNB only, or ALND as a one- or two-stage procedure.
Methods
A retrospective review of a prospectively collected institutional database was used to review rates of lymphoedema and shoulder function in women undergoing breast cancer surgery between 2008 and 2012.
Results
The overall lymphoedema rate in 745 patients was 8.2% at 12 months. There was no difference in lymphoedema rates between those undergoing immediate or delayed ALND (17.8% and 8.6%, respectively; P = 0.092). Post-operative shoulder elevation (odds ratio (OR) = 0.390, 95% confidence interval (CI) = 0.218–0.698) and abduction (OR = 0.437, 95% CI = 0.271–0.705) were reduced if an ALND was performed, although there was no difference between immediate and delayed procedures.
Conclusion
ALND remains a risk factor for post‐operative morbidity. There is no increased risk of lymphoedema or shoulder function deficit with a positive SLNB and delayed ALND compared to immediate ALND.
Introduction
For people accessing treatment for problems with drugs other than opioids, little is known about the relationship between treatment and mortality risk, nor how mortality risk varies across treatment modalities. We addressed these evidence gaps by determining mortality rates during and after treatment for people accessing a range of treatment modalities for several drugs of concern.
Methods
We conducted a cohort study using linked data on publicly funded specialist alcohol or other drug treatment service use and mortality for people receiving treatment in New South Wales between January 2012 and December 2018. We calculated and compared during‐treatment and post‐treatment crude mortality rates and age‐ and sex‐standardised mortality rates, separately for each principal drug of concern and modality.
Results
Over the study period, 45,026 people accessed treatment for problems with alcohol, 26,407 for amphetamine-type stimulants, 23,047 for cannabinoids and 21,556 for opioids. People treated for alcohol or opioid problems had higher crude mortality rates (1.48, 1.91, 1.09 per 100 person-years, respectively) than those with problems with amphetamine-type stimulants or cannabinoids (0.46, 0.30 per 100 person-years, respectively). Mortality rates differed according to treatment status and modality only among people with alcohol or opioid problems.
Discussion and Conclusions
The observed variation in mortality rates indicates there is scope to reduce mortality among people with alcohol or opioid problems accessing treatment. Future research on mortality among people accessing drug and alcohol treatment should account for the variation in mortality by drug of concern and treatment modality.
A need exists to accurately estimate overdose risk and improve understanding of how to deliver treatments and interventions to people with opioid use disorder in a way that reduces such risk. We consider opportunities for predictive analytics and routinely collected administrative data to evaluate how overdose could be reduced among people with opioid use disorder. Specifically, we summarise global trends in opioid use and overdoses; describe the use of big data in research into opioid overdose; consider the potential of predictive modelling, including machine learning, for prevention and monitoring of opioid overdoses; and outline the challenges and risks of using big data and machine learning to reduce opioid-related harms. Future research to improve the coverage and provision of existing interventions, treatments and resources for opioid use disorder requires collaboration across multiple agencies. Predictive modelling could bring the concept of stratified medicine to public health through novel methods, such as emulated trials for evaluating diagnoses and prognoses of opioid use disorder, predicting treatment response and providing targeted treatment recommendations.
Aim
Transfer from pediatric to adult services could lead to clinical deterioration, but few studies have examined this. We sought to examine the clinical impact of a structured, individualized transition and transfer process in patients with cystic fibrosis (CF).
Methods
Medical records of all patients with CF in Western Australia who transferred from a pediatric center (Princess Margaret Hospital for Children) to an adult CF center (Sir Charles Gairdner Hospital) between 2008 and 2012 were reviewed. Data were extracted for 2 years before and after transfer. The number of CF outpatient visits, inpatient days, and home intravenous antibiotic therapy (HIVT) days were recorded at yearly intervals before and after transfer. Sputum culture results at transfer were collected. All respiratory function and anthropometric data over the 4 years were extracted.
Results
Forty-two patients with CF were transferred between 2008 and 2012. The mean age at transfer was 18.9 years (range 17–22). Compared with the year before transfer, the frequency of outpatient visits at 1 and 2 years post-transfer increased. After transfer, there was no change in BMI, HIVT days, or inpatient days, and no acceleration in the expected decline in FEV1.
Conclusion
This study found that transfer from a pediatric to an adult CF center using a structured, individualized transition and transfer process was not associated with accelerated clinical deterioration.
Purpose
Medicine dispensing data require extensive preparation when used for research, and decisions during this process may lead to results that do not replicate between independent studies. We conducted an experiment to examine the impact of these decisions on the results of a study measuring discontinuation, intensification, and switching in a cohort of patients initiating metformin.
Methods
Four Australian sites independently developed a HARmonized Protocol template to Enhance Reproducibility (HARPER) protocol and executed their analyses using the Australian Pharmaceutical Benefits Scheme 10% sample dataset. Each site calculated cohort size and demographics and measured treatment events including discontinuation, switch to another diabetes medicine, and intensification (addition of another diabetes medicine). Time to event and hazard ratios for associations between cohort characteristics and each event were also calculated. Concordance was assessed by measuring deviations from the calculated median of each value across the sites.
Results
Good agreement was found across sites for the number of initiators (median: 53 127, range: 51 848–55 273), gender (56.9% female, range: 56.8%–57.1%) and age group. Each site employed different methods for estimating days supply and used different operational definitions for the treatment events. Consequently, poor agreement was found for incidence of discontinuation (median 55%, range: 34%–67%), switching (median 3.5%, range: 1%–7%), intensification (median 8%, range: 5%–12%), time to event estimates and hazard ratios.
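To illustrate why such operational definitions matter (a hypothetical sketch, not any site's actual code), a simple gap-based discontinuation rule flips its verdict on the same dispensing history as the assumed days supply or grace period changes:

```python
from datetime import date

def discontinued(dispensing_dates, days_supply, grace=1.0, end_of_followup=None):
    """Flag treatment discontinuation in a dispensing history.

    A person is considered discontinued if the gap between consecutive
    dispensings (or from the last dispensing to end of follow-up) exceeds
    days_supply * grace. Both days_supply and grace are the kinds of
    analytical decisions that differed across sites; the values used
    below are illustrative, not HARPER-specified.
    """
    dates = sorted(dispensing_dates)
    allowed = days_supply * grace
    for prev, nxt in zip(dates, dates[1:]):
        if (nxt - prev).days > allowed:
            return True
    if end_of_followup is not None:
        return (end_of_followup - dates[-1]).days > allowed
    return False
```

With 45-day gaps between dispensings, an assumed 30-day supply classifies the person as discontinued while a grace period of 2.0 does not, mirroring how differing operational definitions produced discontinuation estimates ranging from 34% to 67%.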
Conclusions
Differences in analytical decisions when deriving exposure from dispensing data affect replicability. Detailed analytical protocols, such as HARPER, are critical for transparency of operational definitions and interpretations of key study parameters.