•As of December 31, 2022, 71.0% and 71.3% of children born in 2016 and 2017, respectively, were up to date on their routine vaccinations by two years of age, compared to 69.1%, 64.7%, and 60.6% for children born in 2018, 2019, and 2020, respectively.
•There was a slight but steady decrease in vaccination coverage prior to the pandemic, but the percentage of children up to date on routine vaccinations declined markedly during the COVID-19 pandemic.
•Efforts should be made to ensure that all children are up to date on vaccinations to prevent outbreaks of vaccine-preventable diseases.
Routine vaccinations are key to preventing outbreaks of vaccine-preventable diseases. However, there have been documented declines in routine childhood vaccinations in the U.S. and worldwide during the COVID-19 pandemic.
To assess how the COVID-19 pandemic impacted routine childhood vaccinations by evaluating vaccination coverage among children born in 2016–2021.
Data on routine childhood vaccinations reported to CDC by nine U.S. jurisdictions via the immunization information systems (IISs) by December 31, 2022, were available for analyses. Population size for each age group was obtained from the National Center for Health Statistics’ Bridging Population Estimates.
Vaccination coverage for routine childhood vaccinations at age three months, five months, seven months, one year, and two years was calculated by vaccine type and overall, for 4:3:1:3:3:1:4 series (≥4 doses DTaP, ≥3 doses Polio, ≥1 dose MMR, ≥3 doses Hib, ≥3 doses Hepatitis B, ≥1 dose Varicella, and ≥ 4 doses pneumococcal conjugate), for each birth cohort year and by jurisdiction.
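The up-to-date determination described above can be sketched as a simple dose-count check. This is a hypothetical illustration of the 4:3:1:3:3:1:4 series logic, not the actual IIS data model; the vaccine keys and the shape of the `doses` dictionary are assumptions.

```python
# Hypothetical sketch: flag whether a child meets the 4:3:1:3:3:1:4
# series by two years of age, given dose counts per vaccine type.
# Vaccine names and the `doses` dict shape are illustrative only.

SERIES_4313314 = {
    "DTaP": 4, "Polio": 3, "MMR": 1, "Hib": 3,
    "HepB": 3, "Varicella": 1, "PCV": 4,
}

def up_to_date(doses: dict) -> bool:
    """True if every vaccine meets its minimum dose count."""
    return all(doses.get(v, 0) >= n for v, n in SERIES_4313314.items())

def coverage(cohort: list) -> float:
    """Percent of a birth cohort meeting the full series."""
    return 100 * sum(up_to_date(c) for c in cohort) / len(cohort)
```

Coverage by birth cohort and jurisdiction would then follow by grouping children before applying `coverage`.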
Overall, there was a 10.4 percentage point decrease in 4:3:1:3:3:1:4 series coverage among children born in 2020 compared with children born in 2016. As of December 31, 2022, 71.0% and 71.3% of children born in 2016 and 2017, respectively, were up to date on their routine childhood vaccinations by two years of age, compared to 69.1%, 64.7%, and 60.6% for children born in 2018, 2019, and 2020, respectively.
The decline in vaccination coverage for routine childhood vaccines is concerning. To protect population health, health care providers, schools, parents, and state, local, and federal governments must work together to address the declines in vaccination coverage that occurred during the COVID-19 pandemic, maintaining the high levels of population immunity needed to prevent outbreaks of vaccine-preventable diseases.
Summary
The aim of diagnostic point-of-care testing is to minimise the time to obtain a test result, thereby allowing clinicians and patients to make a quick clinical decision. Because point-of-care tests are used in resource-limited settings, the benefits need to outweigh the costs. To optimise point-of-care testing in resource-limited settings, diagnostic tests need rigorous assessments focused on relevant clinical outcomes and operational costs, which differ from assessments of conventional diagnostic tests. We reviewed published studies on point-of-care testing in resource-limited settings, and found no clearly defined metric for the clinical usefulness of point-of-care testing. Therefore, we propose a framework for the assessment of point-of-care tests, and suggest and define the term test efficacy to describe the ability of a diagnostic test to support a clinical decision within its operational context. We also propose revised criteria for an ideal diagnostic point-of-care test in resource-limited settings. Through systematic assessments, comparisons between centralised testing and novel point-of-care technologies can be more formalised, and health officials can better establish which point-of-care technologies represent valuable additions to their clinical programmes.
•A snow accounting routine with only two free parameters can provide reliable results.
•A parsimonious snow routine performs as well as other more complex ones.
•Performance on mountainous catchments can still be improved.
This paper investigates the degree of complexity required in a snow accounting routine to ultimately simulate flows at the catchment outlet. We present a simple, parsimonious and general snow accounting routine (SAR), called CemaNeige, that can be associated with any precipitation-runoff model to simulate discharge at the catchment scale. To obtain results of general applicability, this SAR was tested on a large set of 380 catchments from four countries (France, Switzerland, Sweden and Canada) and combined with four different hydrological models.
Our results show that five basic features provide a good reliability and robustness to the SAR, namely considering: (1) a transition range of temperature for the determination of the solid fraction of precipitation; (2) five altitudinal bands of equal area for snow accumulation; (3) the cold-content of the snowpack (with a parameter controlling snowpack inertia); (4) a degree-day factor controlling snowmelt; (5) uneven snow distribution in each band. This general SAR includes two internal states (the snowpack and its cold-content). Results also indicate that only two free parameters (snowmelt factor and cold-content factor) are warranted in a SAR at the daily time step and that further complexity is not supported by improvements in flow simulation efficiency.
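The two-parameter structure described above (a degree-day snowmelt factor plus a cold-content factor governing snowpack inertia) can be illustrated with a minimal daily-step sketch. This is a simplified stand-in, not CemaNeige's published equations; the parameter values and exact update rules are assumptions for illustration.

```python
# Simplified degree-day snow routine sketch with two free parameters:
# a snowmelt (degree-day) factor and a cold-content weighting factor.
# Internal states: snowpack water equivalent and its cold content.
# Illustrative only -- not CemaNeige's exact formulation.

def step(snowpack, cold_content, snow_in, temp,
         melt_factor=3.0, cc_factor=0.25):
    """Advance the snow store one daily time step.

    snowpack:     snow water equivalent [mm]
    cold_content: smoothed snowpack temperature proxy [degC], <= 0
    snow_in:      solid precipitation for the day [mm]
    temp:         daily mean air temperature [degC]
    """
    snowpack += snow_in
    # Cold content evolves as a weighted running mean of (sub-zero) air
    # temperature; cc_factor controls the snowpack's thermal inertia.
    cold_content = cc_factor * min(temp, 0.0) + (1 - cc_factor) * cold_content
    # Melt occurs only when the pack is "ripe" (cold content at zero)
    # and air temperature is above the melting point.
    melt = 0.0
    if cold_content >= 0.0 and temp > 0.0:
        melt = min(snowpack, melt_factor * temp)
    snowpack -= melt
    return snowpack, cold_content, melt
```

In a full setup this update would run per altitudinal band, with the melt from each band routed into the precipitation-runoff model.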
To justify the selection of the five features above, a sensitivity analysis comparing CemaNeige with other SAR versions is performed. It analyses which snow processes should, or should not, be included to bring significant improvements in model performance.
Compared with the six existing SARs presented in the companion article (Valéry et al., 2014) on the set of 380 catchments, CemaNeige shows better performance on average than five of these six SARs. It provides performance similar to the sixth SAR (MORD4) but with only half its number of free parameters. However, CemaNeige still appears perfectible on mountainous catchments (France and Switzerland), where the lumped SAR MORD4 outperforms it.
CemaNeige can easily be adapted for simulation on ungauged catchments: fixing its two parameters to default values degrades performance much less than it does for the other best-performing SAR. This may be partly due to CemaNeige's parsimony.
The promise of point-of-care medical diagnostics - tests that can be carried out at the site of patient care - is enormous, bringing the benefits of fast and reliable testing and allowing rapid decisions on the course of treatment to be made. To this end, much innovation is occurring in technologies for use in biodiagnostic tests. Assays based on nanomaterials, for example, are now beginning to make the transition from the laboratory to the clinic. But the potential for such assays to become part of routine medical testing depends on many scientific factors, including sensitivity, selectivity and versatility, as well as technological, financial and policy factors.
Latent class models (LCMs) combine the results of multiple diagnostic tests through a statistical model to obtain estimates of disease prevalence and diagnostic test accuracy in situations where there is no single, accurate reference standard. We performed a systematic review of the methodology and reporting of LCMs in diagnostic accuracy studies. This review shows that the use of LCMs in such studies increased sharply in the past decade, notably in the domain of infectious diseases (overall contribution: 59%). The 64 reviewed studies used a range of differently specified parametric latent variable models, applying Bayesian and frequentist methods. The critical assumption underlying the majority of LCM applications (61%) is that the test observations must be independent within the 2 classes. Because violations of this assumption can lead to biased estimates of accuracy and prevalence, performing and reporting checks of whether assumptions are met is essential. Unfortunately, our review shows that 28% of the included studies failed to report any information that enables verification of model assumptions or performance. Because of the lack of information on model fit and adequate evidence "external" to the LCMs, it is often difficult for readers to judge the validity of LCM-based inferences and conclusions reached.
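The conditional-independence assumption means that, within each latent class, the joint probability of the test results factorizes into per-test probabilities. A minimal sketch of that factorization for a two-class LCM with binary tests (the prevalence, sensitivities, and specificities below are made-up values):

```python
# Sketch of the conditional-independence assumption in a two-class LCM:
# given the latent class (diseased / non-diseased), the joint probability
# of the test results is a product of per-test probabilities.
# All parameter values used with this function are illustrative.

def joint_prob(results, prev, sens, spec):
    """P(results) for binary tests under a 2-class conditional-independence LCM.

    results: list of 0/1 test outcomes
    prev:    disease prevalence P(D=1)
    sens:    per-test sensitivities P(T=1 | D=1)
    spec:    per-test specificities P(T=0 | D=0)
    """
    p_diseased = prev
    p_healthy = 1 - prev
    for r, se, sp in zip(results, sens, spec):
        p_diseased *= se if r else (1 - se)
        p_healthy *= (1 - sp) if r else sp
    return p_diseased + p_healthy
```

Fitting an LCM amounts to choosing the prevalence, sensitivities, and specificities that maximize this likelihood over all observed result patterns; when tests are correlated within a class (e.g. two tests based on the same biological mechanism), this factorization no longer holds and estimates can be biased.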
Purpose
This study aims to investigate the impact of the innovative ritual-based redesign of a routine in the challenging context of the dining-out sector, characterized by low employee commitment and high turnover.
Design/methodology/approach
This study adopts a mixed-methods experimental design, centered on a field experiment in a real restaurant involving the restaurant’s welcome entrée routine. The routine is first observed as it happens, after which it is redesigned as a ritual.
Findings
The ritual-based redesign of the routine enhances employee sharing of the purpose of the routine and reduces the variability of the execution time of the routine, which increases group cohesion among the restaurant staff. Besides the positive impact on the routine’s participants, the ritual-based redesign has a beneficial effect on the performance of the routine by increasing the enjoyment of the end-consumers at the restaurant.
Research limitations/implications
The ritual-based redesign of routines is a powerful managerial tool that bonds workers into a solidary community characterized by strong and shared values. This allows guidance of the behavior of new and existing employees in a more efficient and less time-consuming way.
Originality/value
Rituals have been traditionally analyzed from the customer perspective as marketing tools. This research investigates the employees’ perspective, leveraging ritual-based redesign as a managerial tool for increasing cohesion among workers.
The developing world does not have access to many of the best medical diagnostic technologies; they were designed for air-conditioned laboratories, refrigerated storage of chemicals, a constant supply of calibrators and reagents, stable electrical power, highly trained personnel and rapid transportation of samples. Microfluidic systems allow miniaturization and integration of complex functions, which could move sophisticated diagnostic tools out of the developed-world laboratory. These systems must be inexpensive, but also accurate, reliable, rugged and well suited to the medical and social contexts of the developing world.
Background
Specific diagnostic tests to detect severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and resulting COVID-19 disease are not always available and take time to obtain results. Routine laboratory markers such as white blood cell count, measures of anticoagulation, C-reactive protein (CRP) and procalcitonin are used to assess the clinical status of a patient. These laboratory tests may be useful for the triage of people with potential COVID-19 to prioritize them for different levels of treatment, especially in situations where time and resources are limited.
Objectives
To assess the diagnostic accuracy of routine laboratory testing as a triage test to determine if a person has COVID‐19.
Search methods
On 4 May 2020 we undertook electronic searches in the Cochrane COVID‐19 Study Register and the COVID‐19 Living Evidence Database from the University of Bern, which is updated daily with published articles from PubMed and Embase and with preprints from medRxiv and bioRxiv. In addition, we checked repositories of COVID‐19 publications. We did not apply any language restrictions.
Selection criteria
We included both case-control designs and consecutive series of patients that assessed the diagnostic accuracy of routine laboratory testing as a triage test to determine if a person has COVID-19. The reference standard could be reverse transcriptase polymerase chain reaction (RT-PCR) alone; RT-PCR plus clinical expertise and/or imaging; repeated RT-PCR several days apart or from different samples; WHO and other case definitions; and any other reference standard used by the study authors.
Data collection and analysis
Two review authors independently extracted data from each included study. They also assessed the methodological quality of the studies, using QUADAS‐2. We used the 'NLMIXED' procedure in SAS 9.4 for the hierarchical summary receiver operating characteristic (HSROC) meta‐analyses of tests for which we included four or more studies. To facilitate interpretation of results, for each meta‐analysis we estimated summary sensitivity at the points on the SROC curve that corresponded to the median and interquartile range boundaries of specificities in the included studies.
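The anchor points at which summary sensitivity is read off the SROC curve are simply the median and interquartile-range boundaries of the specificities observed across the included studies. A minimal sketch of computing those anchors (the input values below are made up):

```python
# Sketch: compute the specificity anchor points (Q1, median, Q3) at
# which summary sensitivity is estimated on the SROC curve.
# The example specificities are illustrative, not values from the review.
from statistics import quantiles

def anchor_specificities(specs):
    """Return (Q1, median, Q3) of the study-level specificities."""
    q1, q2, q3 = quantiles(specs, n=4, method="inclusive")
    return q1, q2, q3
```

The HSROC model itself (fitted here with SAS `PROC NLMIXED`) then supplies the sensitivity corresponding to each anchor specificity.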
Main results
We included 21 studies in this review, including 14,126 COVID-19 patients and 56,585 non-COVID-19 patients in total. Studies evaluated a total of 67 different laboratory tests. Although we were interested in the diagnostic accuracy of routine tests for COVID-19, the included studies used detection of SARS-CoV-2 infection through RT-PCR as reference standard. There was considerable heterogeneity between tests, threshold values and the settings in which they were applied. For some tests a positive result was defined as a decrease compared to normal values, for other tests a positive result was defined as an increase, and for some tests both increase and decrease may have indicated test positivity. None of the studies had either low risk of bias on all domains or low concerns for applicability for all domains. Only three of the tests evaluated had a summary sensitivity and specificity over 50%. These were: increase in interleukin-6, increase in C-reactive protein and lymphocyte count decrease.
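The per-study accuracy estimates below all derive from thresholding a marker in the appropriate direction (increase or decrease) and tabulating results against the RT-PCR reference. A minimal sketch of that computation, with a made-up threshold and made-up values:

```python
# Sketch: sensitivity and specificity of a routine lab marker as a
# COVID-19 triage test, where a "positive" result can mean a value
# above or below a threshold depending on the marker.
# Threshold, direction, and data are illustrative only.

def sens_spec(values_cases, values_controls, threshold, positive="increase"):
    """Return (sensitivity, specificity) for a thresholded marker.

    values_cases:    marker values in RT-PCR-positive patients
    values_controls: marker values in RT-PCR-negative patients
    positive:        "increase" (positive if above threshold) or
                     "decrease" (positive if below threshold)
    """
    if positive == "increase":
        is_pos = lambda v: v > threshold
    else:  # "decrease"
        is_pos = lambda v: v < threshold
    tp = sum(is_pos(v) for v in values_cases)        # true positives
    tn = sum(not is_pos(v) for v in values_controls)  # true negatives
    return tp / len(values_cases), tn / len(values_controls)
```

Because each study chose its own threshold and direction, these per-study pairs are heterogeneous, which is why the review summarizes them on an SROC curve rather than pooling them directly.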
Blood count
Eleven studies evaluated a decrease in white blood cell count, with a median specificity of 93% and a summary sensitivity of 25% (95% CI 8.0% to 27%; very low‐certainty evidence). The 15 studies that evaluated an increase in white blood cell count had a lower median specificity and a lower corresponding sensitivity. Four studies evaluated a decrease in neutrophil count. Their median specificity was 93%, corresponding to a summary sensitivity of 10% (95% CI 1.0% to 56%; low‐certainty evidence). The 11 studies that evaluated an increase in neutrophil count had a lower median specificity and a lower corresponding sensitivity. The summary sensitivity of an increase in neutrophil percentage (4 studies) was 59% (95% CI 1.0% to 100%) at median specificity (38%; very low‐certainty evidence). The summary sensitivity of an increase in monocyte count (4 studies) was 13% (95% CI 6.0% to 26%) at median specificity (73%; very low‐certainty evidence). The summary sensitivity of a decrease in lymphocyte count (13 studies) was 64% (95% CI 28% to 89%) at median specificity (53%; low‐certainty evidence). Four studies that evaluated a decrease in lymphocyte percentage showed a lower median specificity and lower corresponding sensitivity. The summary sensitivity of a decrease in platelets (4 studies) was 19% (95% CI 10% to 32%) at median specificity (88%; low‐certainty evidence).
Liver function tests
The summary sensitivity of an increase in alanine aminotransferase (9 studies) was 12% (95% CI 3% to 34%) at median specificity (92%; low-certainty evidence). The summary sensitivity of an increase in aspartate aminotransferase (7 studies) was 29% (95% CI 17% to 45%) at median specificity (81%; low-certainty evidence). The summary sensitivity of a decrease in albumin (4 studies) was 21% (95% CI 3% to 67%) at median specificity (66%; low-certainty evidence). The summary sensitivity of an increase in total bilirubin (4 studies) was 12% (95% CI 3.0% to 34%) at median specificity (92%; very low-certainty evidence).
Markers of inflammation
The summary sensitivity of an increase in CRP (14 studies) was 66% (95% CI 55% to 75%) at median specificity (44%; very low-certainty evidence). The summary sensitivity of an increase in procalcitonin (6 studies) was 3% (95% CI 1% to 19%) at median specificity (86%; very low-certainty evidence). The summary sensitivity of an increase in IL-6 (4 studies) was 73% (95% CI 36% to 93%) at median specificity (58%; very low-certainty evidence).
Other biomarkers
The summary sensitivity of an increase in creatine kinase (5 studies) was 11% (95% CI 6% to 19%) at median specificity (94%; low-certainty evidence). The summary sensitivity of an increase in serum creatinine (4 studies) was 7% (95% CI 1% to 37%) at median specificity (91%; low-certainty evidence). The summary sensitivity of an increase in lactate dehydrogenase (4 studies) was 25% (95% CI 15% to 38%) at median specificity (72%; very low-certainty evidence).
Authors' conclusions
Although these tests give an indication about the general health status of patients and some tests may be specific indicators for inflammatory processes, none of the tests we investigated are useful for accurately ruling in or ruling out COVID‐19 on their own. Studies were done in specific hospitalized populations, and future studies should consider non‐hospital settings to evaluate how these tests would perform in people with milder symptoms.
Progress feedback is an intervention aimed at enhancing patient outcomes in routine clinical practice. This study reports a comprehensive multilevel meta-analysis on the effectiveness of progress feedback in psychological treatments in curative care. The short- and long-term effects of feedback on symptom reduction were investigated using 58 (randomized and non-randomized) studies, analyzing 110 effect sizes in a total of 21,699 patients. Effects of feedback on dropout rate, percentage of deteriorated cases, and treatment duration were also examined. Moderation analyses were conducted for study and feedback characteristics. A small significant effect of progress feedback on symptom reduction (d = 0.15, 95% CI: 0.10, 0.20) was found, compared to control groups. This was also true for not-on-track cases (d = 0.17, 95% CI: 0.11, 0.22). In addition, feedback had a small favorable effect on dropout rates (OR = 1.19, 95% CI: 1.03, 1.38). The moderation analyses identified several potentially interesting variables for further research, including feedback instrument, outcome instrument, type of feedback, feedback frequency, treatment intensity, and country in which the study was conducted. Future studies should report on these variables more consistently so that we can obtain a better understanding of when and why feedback improves outcomes.
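As a simplified stand-in for the multilevel meta-analysis described above, the basic pooling step can be sketched as inverse-variance weighting of study effect sizes (a fixed-effect simplification; the actual analysis is multilevel, and the values below are made up):

```python
# Sketch: inverse-variance (fixed-effect) pooling of standardized mean
# differences -- a simplified illustration of meta-analytic pooling,
# not the multilevel model used in the study. Inputs are illustrative.

def pooled_d(effects, variances):
    """Inverse-variance weighted mean effect size and its 95% CI."""
    weights = [1 / v for v in variances]
    d = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1 / sum(weights)) ** 0.5  # standard error of the pooled effect
    return d, (d - 1.96 * se, d + 1.96 * se)
```

A multilevel model additionally accounts for multiple effect sizes nested within the same study, which matters here since 110 effect sizes come from 58 studies.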
•Feedback on patients’ progress in psychotherapy improves symptom reduction.
•Dropout is reduced by 20% when progress feedback is used.
•No effects were found on treatment duration or the percentage of deteriorated cases.
•Effects were moderated by outcome and feedback instrument, country, and study year.
•Some feedback instruments seem to have specific effects in certain subsamples.