Challenging behavior is a core symptom in neuropathological mucopolysaccharidoses (MPS) and puts major strain on affected families. Although multimodal approaches including behavioral strategies could be a valuable addition to treatment, there is a lack of research into the effectiveness of specific measures. This explorative, cross-sectional study aimed to collect parental experiences with effective day-to-day measures against challenging behavior in MPS and focused on four major research questions: First, what is challenging behavior in MPS? Second, which strategies are helpful in day-to-day coping with challenging behavior? Third, how strong are parental acceptance of the illness and the disorder's impact on family relationships? Fourth, what are beneficial personal and interfamilial strategies for coping with the disorder in general?
A semi-structured questionnaire was designed de novo in cooperation with affected families. 37 of 268 questionnaires were returned (response rate: 13.8%), of which 34 (MPS I: n = 8; MPS II: n = 8; MPS III: n = 18) met the inclusion criteria and could be included in the data analysis. Assessment of challenging symptoms was based on perceived frequency and on parent and child stress. Exploration of possible coping strategies for challenging behavior and general illness-related strain included the evaluation of perceived effectiveness. Questionnaires were completed by patients' relatives and analyzed for strategies to cope with challenging behavior and the disorder's impact. STROBE criteria were respected.
MPS I was reported to show a lower frequency and better perceived manageability of challenging behavior than MPS II and III. Sleep disturbance, hyperactivity, agitation, aggression and orality appeared to be relevant symptoms with regard to frequency and/or parent stress. Reported measures were manifold; worthwhile approaches against challenging behavior appeared to aim at distraction, relief and environmental changes. Medication and non-medication approaches were rated as similarly effective. Social exchange, private space and networking with other affected families seemed highly important for personal and interfamilial well-being.
Multimodal mentoring for affected families could be based on the following equivalent pillars: (1) medication therapy for challenging behavior, including evaluation of costs and benefits; (2) guided implementation and re-evaluation of specific behavioral measures against challenging behavior; (3) psychosocial support of MPS families, including options for strengthening parental well-being and family functioning. Trial registration: This study was registered at clinicaltrials.gov prior to study start (NCT number: NCT03161171; date: 2017/05/19).
Even though adaptive two-stage designs with unblinded interim analyses are becoming increasingly popular in clinical trial design, there is a lack of statistical software to make their application more straightforward. The package adoptr fills this gap for the common case of two-stage one- or two-arm trials with (approximately) normally distributed outcomes. In contrast to previous approaches, adoptr optimizes the entire design upfront, which allows for maximal efficiency. To facilitate experimentation with different objective functions, adoptr supports a flexible way for the user to specify both (composite) objective scores and (conditional) constraints. Special emphasis was put on providing measures that aid practitioners in validating the package.
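The kind of objective optimized in such two-stage designs can be illustrated with a toy calculation. The sketch below is not adoptr's API (adoptr is an R package); it is a minimal Python illustration of one common objective, the expected sample size of a two-stage one-arm z-test with early stopping for futility or efficacy and, for simplicity, a fixed stage-two size (a full optimization would also treat the stage-two size and rejection boundary as functions of the interim result).

```python
import numpy as np
from scipy import stats

def expected_sample_size(n1, c1_futility, c1_efficacy, n2, theta):
    """Expected total sample size of a two-stage one-arm z-test design.

    The trial stops at stage one if the interim z-statistic falls below
    c1_futility (futility) or above c1_efficacy (early efficacy);
    otherwise a fixed second-stage sample of n2 is recruited.
    """
    # The interim z-statistic is N(theta * sqrt(n1), 1) at effect size theta.
    mean = theta * np.sqrt(n1)
    p_continue = (stats.norm.cdf(c1_efficacy - mean)
                  - stats.norm.cdf(c1_futility - mean))
    return n1 + p_continue * n2

# Under the null (theta = 0), early stopping keeps the expected size
# well below the maximal size of n1 + n2 = 100.
print(expected_sample_size(n1=50, c1_futility=0.0, c1_efficacy=2.0,
                           n2=50, theta=0.0))
```

Optimizing a design upfront then amounts to minimizing such a score over the stage-one size and boundaries, subject to Type I error and power constraints.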
Adaptive enrichment designs are an attractive option for clinical trials that aim at demonstrating the efficacy of therapies which may show different benefit for the full patient population and a prespecified subgroup. In these designs, either the subgroup or the full population is selected for further exploration based on interim data. When selection is based on efficacy data, this introduces bias into the commonly used maximum likelihood estimator. For the situation of two-stage designs with a single prespecified subgroup, we present six alternative estimators and investigate their performance in a simulation study. The most consistent reduction of bias over the range of scenarios considered was achieved by a method combining the uniformly minimum variance conditionally unbiased estimator with a conditional moment estimator. Application of the methods is illustrated by a clinical trial example.
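The selection bias described above is easy to reproduce by simulation. The following Python sketch is a simplified illustration (it ignores stage-two data and assumes independent, normally distributed stage-one effect estimates): whichever of the full population and the subgroup shows the larger observed stage-one effect is selected, and the mean error of the naive estimator for the selected population is reported.

```python
import numpy as np

rng = np.random.default_rng(1)

def selection_bias(theta_full, theta_sub, n1, n_sim=100_000):
    """Monte Carlo estimate of the bias of the naive stage-one estimator
    when the population with the larger observed effect is selected."""
    se = 1 / np.sqrt(n1)  # standard error of a stage-one effect estimate
    est_full = rng.normal(theta_full, se, n_sim)
    est_sub = rng.normal(theta_sub, se, n_sim)
    pick_sub = est_sub > est_full
    selected_est = np.where(pick_sub, est_sub, est_full)
    selected_truth = np.where(pick_sub, theta_sub, theta_full)
    return (selected_est - selected_truth).mean()

# With equal true effects, selecting the larger estimate biases the
# naive estimator upward (theoretically by se / sqrt(pi) ~ 0.056 here).
print(selection_bias(theta_full=0.2, theta_sub=0.2, n1=100))
```

The bias is largest when the two true effects are similar, which is exactly the setting in which interim selection is most likely to be driven by noise.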
The aim of this study is to quantify the hospital burden of COVID-19 during the first wave and how it changed over calendar time, and to interpret the results in light of the emergency measures introduced to manage the strain on secondary healthcare.
This is a cohort study of hospitalised confirmed cases of COVID-19 admitted from February to June 2020 and followed up until 17 July 2020, analysed using a mixture multi-state model. All hospitalised patients with confirmed COVID-19 in Regione Lombardia, admitted from February to June 2020 with a non-missing hospital of admission and a non-missing admission date, were included.
The cohort consists of 40,550 patients hospitalised during the first wave. These patients had a median age of 69 (interquartile range 56-80) and were more likely to be men (60%) than women (40%). The hospital-fatality risk, averaged over all pathways through hospital, was 27.5% (95% CI 27.1-28.0%), and steadily decreased from 34.6% (32.5-36.6%) in February to 7.6% (6.3-10.6%) in June. Among surviving patients, median length of stay in hospital was 11.8 (11.6-12.3) days, compared to 8.1 (7.8-8.5) days in non-survivors. Averaged over final outcomes, median length of stay in hospital decreased from 21.4 (20.5-22.8) days in February to 5.2 (4.7-5.8) days in June.
The hospital burden, in terms of both risks of poor outcomes and lengths of stay in hospital, decreased over the months of the first wave, perhaps reflecting improved treatment and management of COVID-19 cases, as well as reduced burden as the first wave waned. The quantified burden allows for planning of hospital beds needed for current and future waves of SARS-CoV-2 infection.
Spreading depolarizations (SDs) are characterized by breakdown of transmembrane ion gradients and excitotoxicity. Experimentally, N-methyl-D-aspartate receptor (NMDAR) antagonists block a majority of SDs. In many hospitals, the NMDAR antagonist s-ketamine and the GABA-A receptor agonist midazolam represent the current second-line combination treatment to sedate patients with devastating cerebral injuries. A pressing clinical question is whether this option should become first-line in sedation-requiring individuals in whom SDs are detected; yet the s-ketamine dose necessary to adequately inhibit SDs is unknown. Moreover, use-dependent tolerance could be a problem for SD inhibition in the clinic.
We performed a retrospective cohort study of 66 patients with aneurysmal subarachnoid hemorrhage (aSAH) from a prospectively collected database. Thirty-three of 66 patients received s-ketamine during electrocorticographic neuromonitoring of SDs in neurointensive care. The decision to give s-ketamine was dependent on the need for stronger sedation, so it was expected that patients receiving s-ketamine would have a worse clinical outcome.
S-ketamine application started 4.2 ± 3.5 days after aSAH. The mean dose was 2.8 ± 1.4 mg/kg body weight (BW)/h and thus higher than the dose recommended for sedation. First, patients were divided according to whether they received s-ketamine at any time or not. No significant difference in SD counts was found between groups (negative binomial model using the SD count per patient as outcome variable, p = 0.288), most likely because 368 SDs had already occurred in the s-ketamine group before s-ketamine was given. However, in patients receiving s-ketamine, we found a significant decrease in SD incidence when s-ketamine was started (Poisson model with a random intercept per patient, coefficient −1.83 (95% confidence interval −2.17 to −1.50), p < 0.001; logistic regression model, odds ratio (OR) 0.13 (0.08-0.19), p < 0.001). Thereafter, the data were further divided into low-dose (0.1-2.0 mg/kg BW/h) and high-dose (2.1-7.0 mg/kg BW/h) segments. High-dose s-ketamine resulted in a further significant decrease in SD incidence (Poisson model, −1.10 (−1.71 to −0.49), p < 0.001; logistic regression model, OR 0.33 (0.17-0.63), p < 0.001). There was little evidence of SD tolerance to long-term s-ketamine sedation through 5 days.
These results provide a foundation for a multicenter, neuromonitoring-guided, proof-of-concept trial of ketamine and midazolam as a first-line sedation regimen.
Sample size derivation is a crucial element of planning any confirmatory trial. The required sample size is typically derived based on constraints on the maximal acceptable Type I error rate and the minimal desired power. Power depends on the unknown true effect and tends to be calculated either for the smallest relevant effect or a likely point alternative. The former might be problematic if the minimal relevant effect is close to the null, thus requiring an excessively large sample size, while the latter is dubious since it does not account for the a priori uncertainty about the likely alternative effect. A Bayesian perspective on sample size derivation for a frequentist trial can reconcile arguments about the relative a priori plausibility of alternative effects with ideas based on the relevance of effect sizes. Many suggestions as to how such "hybrid" approaches could be implemented in practice have been put forward. However, key quantities are often defined in subtly different ways in the literature. Starting from the traditional entirely frequentist approach to sample size derivation, we derive consistent definitions for the most commonly used hybrid quantities and highlight connections, before discussing and demonstrating their use in sample size derivation for clinical trials.
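The contrast between power at a point alternative and a prior-averaged "hybrid" power can be made concrete with a small numerical sketch. The Python code below is an illustration under assumptions of our own choosing (a one-sided one-arm z-test and a normal prior on the standardized effect size); the function names are ours and the prior-averaged quantity corresponds to what is often called probability of success or expected power, with definitions varying across the literature as noted above.

```python
import numpy as np
from scipy import stats

def power(theta, n, alpha=0.025):
    """Frequentist power of a one-sided one-arm z-test at effect size theta."""
    return 1 - stats.norm.cdf(stats.norm.ppf(1 - alpha) - theta * np.sqrt(n))

def expected_power(n, prior_mean, prior_sd, alpha=0.025):
    """Hybrid quantity: frequentist power averaged over a normal prior
    on the effect size (numerical integration on a grid)."""
    thetas = np.linspace(prior_mean - 5 * prior_sd,
                         prior_mean + 5 * prior_sd, 2001)
    weights = stats.norm.pdf(thetas, prior_mean, prior_sd)
    weights /= weights.sum()
    return float((power(thetas, n, alpha) * weights).sum())

# At the same n, averaging over prior uncertainty gives a notably lower
# figure than power evaluated at the likely point alternative.
print(power(0.3, n=100))              # point-alternative power
print(expected_power(100, 0.3, 0.1))  # prior-averaged power
```

Because power is a concave, saturating function in the region of interest, spreading prior mass around the point alternative pulls the averaged power below the point value, which is one reason sample sizes justified by point-alternative power alone can be optimistic.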
Objective To evaluate and compare the effects of tooth-borne and bone-borne distraction devices in surgically assisted rapid maxillary expansion (SARME) on dental and skeletal structures. Study Design A sample of 33 skeletally mature patients with transverse maxillary deficiencies was examined with cone beam computed tomography (CBCT) before and 3 months after surgery. Fourteen patients were treated with tooth-borne devices and 19 with bone-borne devices. Results Dental crown expansion in the first premolars did not differ significantly between the two groups; median expansion was 5.55 mm (interquartile range [IQR] 5.23) in the tooth-borne device group and 4.6 mm (IQR 3.4) in the bone-borne device group. In the first molars, crown expansion and lateral tipping were significantly greater in the tooth-borne device group (P ≤ .02). The median skeletal nasal isthmus increase was significantly greater in the bone-borne device group, at 3.0 mm, than in the tooth-borne device group, at 0.98 mm (P ≤ .02). Conclusions Both tooth-borne and bone-borne devices are effective treatment modalities to correct maxillary transverse deficiencies. Bone-borne devices produced greater widening of the skeletal nasal floor and fewer dental side effects in the first molars.
Understanding the risk factors associated with hospital burden of COVID-19 is crucial for healthcare planning for any future waves of infection.
An observational cohort study is performed, using data on all PCR-confirmed cases of COVID-19 in Regione Lombardia, Italy, during the first wave of infection from February to June 2020. A multi-state modelling approach is used to simultaneously estimate risks of progression through hospital to final outcomes of either death or discharge, by pathway (via critical care or not), and the times to final events (lengths of stay). Logistic and time-to-event regressions are used to quantify the association of patient and population characteristics with the risks of hospital outcomes and lengths of stay, respectively.
Risks of severe outcomes such as ICU admission and mortality have decreased with month of admission (for example, the odds ratio of ICU admission in June vs March is 0.247 (95% CI 0.120-0.508)) and increased with age (the odds ratio of ICU admission in the 45-65 vs 65+ age group is 0.286 (0.201-0.406)). Care home residents aged 65+ are associated with increased risk of hospital mortality and decreased risk of ICU admission. Being a healthcare worker appears to have a protective association with mortality risk (odds ratio of ICU mortality 0.254 (0.143-0.453) relative to non-healthcare workers) and length of stay. Lengths of stay decrease with month of admission for survivors, but do not appear to vary with month for non-survivors.
Improvements in clinical knowledge, treatment, patient and hospital management and public health surveillance, together with the waning of the first wave after the first lockdown, are hypothesised to have contributed to the reduced risks and lengths of stay over time.
After traumatic spinal cord injury, an acute phase triggered by the trauma is followed by a subacute phase involving inflammatory processes. We previously demonstrated that changes in peripheral serum cytokine expression depend on neurological outcome after spinal cord injury. In a subsequent intermediate phase, repair and remodeling take place under the mediation of growth factors such as insulin-like growth factor 1 (IGF-1). IGF-1 is a promising growth factor that is thought to act as a neuroprotective agent. Since previous findings were derived from animal studies, our aim was to investigate this hypothesis in humans based on peripheral blood serum. Forty-five patients with traumatic spinal cord injury were investigated over a period of three months after trauma. Blood samples were taken according to a fixed schedule and IGF-1 levels were determined. Clinical data, including AIS scores at admission to the hospital and at discharge, were collected and compared with IGF-1 levels. In our study, we observed distinct patterns in the expression of IGF-1 in peripheral blood serum after traumatic spinal cord injury regardless of the degree of plegia. All patients showed a marked increase in levels seven days after injury. IGF-1 serum levels were significantly different from the initial measurements at four and nine hours and at seven and 14 days after injury, as well as at one, two and three months after injury. We did not detect a significant correlation between fracture and IGF-1 serum levels, nor between the number of operations performed after trauma and IGF-1 serum levels. Patients with clinically documented neurological remission showed consistently higher IGF-1 levels than patients without neurological remission. These data could form the basis for the establishment of animal models for further, much-needed research in the field of spinal cord injury.