Antiretroviral medications that are used as prophylaxis can prevent acquisition of human immunodeficiency virus type 1 (HIV-1) infection. However, in clinical trials among African women, the incidence of HIV-1 infection was not reduced, probably because of low adherence. Longer-acting methods of drug delivery, such as vaginal rings, may simplify use of antiretroviral medications and provide HIV-1 protection.
We conducted a phase 3, randomized, double-blind, placebo-controlled trial of a monthly vaginal ring containing dapivirine, a non-nucleoside HIV-1 reverse-transcriptase inhibitor, involving women between the ages of 18 and 45 years in Malawi, South Africa, Uganda, and Zimbabwe.
Among the 2629 women who were enrolled, 168 HIV-1 infections occurred: 71 in the dapivirine group and 97 in the placebo group (incidence, 3.3 and 4.5 per 100 person-years, respectively). The incidence of HIV-1 infection in the dapivirine group was lower by 27% (95% confidence interval [CI], 1 to 46; P=0.046) than that in the placebo group. In an analysis that excluded data from two sites that had reduced rates of retention and adherence, the incidence of HIV-1 infection in the dapivirine group was lower by 37% (95% CI, 12 to 56; P=0.007) than that in the placebo group. In a post hoc analysis, higher rates of HIV-1 protection were observed among women over the age of 21 years (56%; 95% CI, 31 to 71; P<0.001) but not among those 21 years of age or younger (-27%; 95% CI, -133 to 31; P=0.45), a difference that was correlated with reduced adherence. The rates of adverse medical events and antiretroviral resistance among women who acquired HIV-1 infection were similar in the two groups.
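As a quick check on the headline figure, the reported efficacy follows directly from the two incidence rates (relative reduction in incidence). The sketch below recomputes it; the person-year denominators are back-calculated from events and rates, so they are approximations rather than the trial's exact follow-up totals.

```python
# Reported counts and incidence rates (per 100 person-years)
events_dapivirine, events_placebo = 71, 97
incidence_dapivirine, incidence_placebo = 3.3, 4.5

# Approximate person-years implied by the reported rates (not exact trial denominators)
py_dapivirine = 100 * events_dapivirine / incidence_dapivirine
py_placebo = 100 * events_placebo / incidence_placebo

# Efficacy = relative reduction in incidence rate
efficacy = 1 - incidence_dapivirine / incidence_placebo
print(f"~{py_dapivirine:.0f} vs ~{py_placebo:.0f} person-years of follow-up")
print(f"efficacy ≈ {100 * efficacy:.0f}%")  # ≈ 27%, matching the reported estimate
```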
A monthly vaginal ring containing dapivirine reduced the risk of HIV-1 infection among African women, with increased efficacy in subgroups with evidence of increased adherence. (Funded by the National Institutes of Health; ClinicalTrials.gov number, NCT01617096.)
Juvenile and adult reef fishes often undergo migration, ontogenetic habitat shifts, and nocturnal foraging movements. The orientation cues used for these behaviours are largely unknown. In this study, the use of sound as an orientation cue guiding the nocturnal movements of adult and juvenile reef fishes at Lizard Island, Great Barrier Reef, was examined. The first experiment compared the movements of fishes to small patch reefs broadcasting reef noise with those to silent reefs. No significant responses were found in the 79 adults that were collected, but the 166 juveniles collected showed an increased diversity each morning on the reefs with broadcast noise, and significantly greater numbers of juveniles from three taxa (Apogonidae, Gobiidae and Pinguipedidae) were collected from reefs with broadcast noise. The second experiment compared the movement of adult and juvenile fishes to reefs broadcasting high (>570 Hz) or low (<570 Hz) frequency reef noise, or to silent reefs. Of the 122 adults collected, the highest diversity was seen at the low frequency reefs, and adults from two families (Gobiidae and Blenniidae) preferred these reefs. A similar trend was observed in the 382 juveniles collected, with higher diversity at the reefs with low frequency noise. This preference was seen in the juvenile apogonids; however, juvenile gobiids were attracted to both high and low sound treatments equally, and juvenile-stage Acanthuridae preferred the high frequency noise. This evidence that juvenile and adult reef fishes orientate with respect to the soundscape raises important issues for management, conservation and the protection of sound cues used in natural behaviour.
The effects of natural variation in the number of copies of the growth hormone (GH) gene on growth parameters, plasma GH profiles, and the response to GHRH challenge were compared in Coopworth ram lambs from selection lines differing in body composition and GH levels. Different genotypes at the GH locus carried two, three, or four copies of the GH gene, and GH secretion was studied under ad libitum feeding conditions and in the fasted state. There were no significant effects of GH genotype on any parameters of growth or body composition. Basal serum GH concentration, GH pulse frequency, and GH pulse amplitude differed significantly with selection line and fasting, but did not differ significantly between the GH genotypes. Subtle but significant differences were found between the GH genotypes in their responsiveness to GHRH. For the ad libitum-fed Lean selection line animals, the first GHRH challenge resulted in a higher mean maximum response for GH1 than GH2 (P < 0.05). Between the first and the second challenges there was a decrease in maximum response for the GH1 genotype and an increase for the GH2 genotype (P < 0.05 for GH genotype main effect). The differences between GH genotypes in response to GHRH challenge suggest that polymorphism in the number of GH gene copies in sheep may have physiological implications for the function of the GH axis, which may be manifested in growing lambs only under specific genotype-environment combinations.
Influenza A virus (IAV) causes up to half a million deaths worldwide annually, 90% of which occur in older adults. We show that IAV-infected monocytes from older humans have impaired antiviral interferon production but retain intact inflammasome responses. To understand the in vivo consequence, we used mice expressing a functional Mx gene encoding a major interferon-induced effector against IAV in humans. In Mx1-intact mice with weakened resistance due to deficiencies in Mavs and Tlr7, we found an elevated respiratory bacterial burden. Notably, mortality in the absence of Mavs and Tlr7 was independent of viral load or MyD88-dependent signaling but dependent on bacterial burden, caspase-1/11, and neutrophil-dependent tissue damage. Therefore, in the context of weakened antiviral resistance, vulnerability to IAV disease is a function of caspase-dependent pathology.
A report from a high-volume single center indicated a survival benefit of receiving a kidney transplant from an HLA-incompatible live donor as compared with remaining on the waiting list, whether or not a kidney from a deceased donor was received. The generalizability of that finding is unclear.
In a 22-center study, we estimated the survival benefit for 1025 recipients of kidney transplants from HLA-incompatible live donors who were matched with controls who remained on the waiting list or received a transplant from a deceased donor (waiting-list-or-transplant control group) and controls who remained on the waiting list but did not receive a transplant (waiting-list-only control group). We analyzed the data with and without patients from the highest-volume center in the study.
Recipients of kidney transplants from incompatible live donors had a higher survival rate than either control group at 1 year (95.0%, vs. 94.0% for the waiting-list-or-transplant control group and 89.6% for the waiting-list-only control group), 3 years (91.7% vs. 83.6% and 72.7%, respectively), 5 years (86.0% vs. 74.4% and 59.2%), and 8 years (76.5% vs. 62.9% and 43.9%) (P<0.001 for all comparisons with the two control groups). The survival benefit was significant at 8 years across all levels of donor-specific antibody: 89.2% for recipients of kidney transplants from incompatible live donors who had a positive Luminex assay for anti-HLA antibody but a negative flow-cytometric cross-match versus 65.0% for the waiting-list-or-transplant control group and 47.1% for the waiting-list-only control group; 76.3% for recipients with a positive flow-cytometric cross-match but a negative cytotoxic cross-match versus 63.3% and 43.0% in the two control groups, respectively; and 71.0% for recipients with a positive cytotoxic cross-match versus 61.5% and 43.7%, respectively. The findings did not change when patients from the highest-volume center were excluded.
This multicenter study validated single-center evidence that patients who received kidney transplants from HLA-incompatible live donors had a substantial survival benefit as compared with patients who did not undergo transplantation and those who waited for transplants from deceased donors. (Funded by the National Institute of Diabetes and Digestive and Kidney Diseases.).
Long-term care facilities are high-risk settings for severe outcomes from outbreaks of Covid-19, owing to both the advanced age and frequent chronic underlying health conditions of the residents and the movement of health care personnel among facilities in a region.
After identification on February 28, 2020, of a confirmed case of Covid-19 in a skilled nursing facility in King County, Washington, Public Health-Seattle and King County, aided by the Centers for Disease Control and Prevention, launched a case investigation, contact tracing, quarantine of exposed persons, isolation of confirmed and suspected cases, and on-site enhancement of infection prevention and control.
As of March 18, a total of 167 confirmed cases of Covid-19 affecting 101 residents, 50 health care personnel, and 16 visitors were found to be epidemiologically linked to the facility. Most cases among residents involved respiratory illness consistent with Covid-19; however, in 7 residents no symptoms were documented. Hospitalization rates for facility residents, visitors, and staff were 54.5%, 50.0%, and 6.0%, respectively. The case fatality rate for residents was 33.7% (34 of 101). As of March 18, a total of 30 long-term care facilities with at least one confirmed case of Covid-19 had been identified in King County.
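The headline outbreak figures above are simple ratios of the reported case counts. The sketch below recomputes the total linked cases and the resident case fatality rate from those counts, as a consistency check rather than an analysis of the underlying surveillance data.

```python
# Reported epidemiologically linked cases by group
cases = {"residents": 101, "health care personnel": 50, "visitors": 16}
total_cases = sum(cases.values())

# Case fatality rate among residents: deaths / confirmed resident cases
deaths_residents = 34
cfr_residents = deaths_residents / cases["residents"]

print(total_cases)                      # 167 total linked cases
print(f"{100 * cfr_residents:.1f}%")    # 33.7% case fatality among residents
```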
In the context of rapidly escalating Covid-19 outbreaks, proactive steps by long-term care facilities to identify and exclude potentially infected staff and visitors, actively monitor for potentially infected patients, and implement appropriate infection prevention and control measures are needed to prevent the introduction of Covid-19.
Background: Recent descriptions of the clinical and laboratory features of subjects with acute porphyrias in the US are lacking. Our aim was to describe the clinical, biochemical, and genetic features of 108 subjects. Methods: Between September 2010 and December 2012, 108 subjects with acute porphyrias (90 acute intermittent porphyria, 9 hereditary coproporphyria, 9 variegate porphyria) were enrolled into an observational study. Genetic testing was performed at a central genetic testing laboratory, and clinical information was entered into a central database. Selected features were compared with data for adults in the US. Results: Most subjects (88/108, 81%) were female, with self-reported onset of symptoms in the second through fourth decades of life. The most common symptom was abdominal pain. Appendectomies and cholecystectomies were common before a diagnosis of porphyria. The diagnosis was delayed by a mean of 15 years. Anxiety and depression were common, and 18% complained of chronic symptoms, especially neuropathic and other pains. The incidences of systemic arterial hypertension, chronic kidney disease, seizure disorders, and psychiatric conditions were markedly increased. Mutations of the known causative genes were found in 102/105 of those tested, with novel mutations found in 37, including in 7/8 subjects with hereditary coproporphyria. Intravenous hematin was the most effective therapy both for treatment of acute attacks and for prevention of recurrent attacks. Conclusions: Acute porphyrias often remain undiagnosed for more than a decade after first symptoms develop. Intravenous hematin is the treatment of choice, both for treatment of acute attacks and for prevention of recurrent attacks.
The preferred mechanisms of racemization for three tris-chelate complexes, Co(acac)3, [Fe(phen)3]3+, and Fe[S2CN(CH2)4]3, were investigated by molecular modeling. The transition states for both a Bailar twist and a Rây-Dutt twist were considered; semi-empirical calculations (PM3) yielded activation energies. The preferred mechanism was the Bailar twist for Co(acac)3 and Fe[S2CN(CH2)4]3, with activation energies of 83.2 and 7.3 kcal mol−1, respectively, and the Rây-Dutt twist for [Fe(phen)3]3+, with an activation energy of 114.4 kcal mol−1. These results are compared with those of geometrical models.
We present an overview of the design and status of the Polarbear-2 and the Simons Array experiments. Polarbear-2 is a cosmic microwave background polarimetry experiment which aims to characterize the arc-minute angular scale B-mode signal from weak gravitational lensing and search for the degree angular scale B-mode signal from inflationary gravitational waves. The receiver has a 365 mm diameter focal plane cooled to 270 mK. The focal plane is filled with 7588 dichroic lenslet-antenna-coupled polarization-sensitive transition-edge sensor (TES) bolometric pixels that are sensitive to the 95 and 150 GHz bands simultaneously. The TES bolometers are read out by SQUIDs with 40-channel frequency-domain multiplexing. Refractive optical elements are made with high-purity alumina to achieve high optical throughput. The receiver is designed to achieve a noise-equivalent temperature of 5.8 μK_CMB·√s in each frequency band. Polarbear-2 will deploy in 2016 in the Atacama Desert in Chile. The Simons Array is a project to further increase sensitivity by deploying three Polarbear-2-type receivers. The Simons Array will cover the 95, 150, and 220 GHz frequency bands for foreground control. The Simons Array will be able to constrain the tensor-to-scalar ratio to σ(r) = 6 × 10^-3 at r = 0.1 and the sum of neutrino masses Σm_ν to 40 meV (1σ).
Objectives
The objective of this study was to investigate the quality of on‐plot piped water and rainwater at the point of consumption in an area with rapidly expanding coverage of 'improved' water sources.
Methods
Cross‐sectional study of 914 peri‐urban households in Kandal Province, Cambodia, between July–August 2011. We collected data from all households on water management, drinking water quality and factors potentially related to post‐collection water contamination. Drinking water samples were taken directly from a subsample of household taps (n = 143), stored tap water (n = 124), other stored water (n = 92) and treated stored water (n = 79) for basic water quality analysis for Escherichia coli and other parameters.
Results
Household drinking water management was complex, with different sources used at any given time and across seasons. Rainwater was the most commonly used drinking water source. Households mixed different water sources in storage containers, including 'improved' with 'unimproved' sources. Piped water from taps deteriorated during storage (P < 0.0005), with the mean E. coli count rising from 520 cfu/100 ml (coefficient of variation, CV: 5.7) to 1100 cfu/100 ml (CV: 3.4). Stored non‐piped water (primarily rainwater) had a mean E. coli count of 1500 cfu/100 ml (CV: 4.1), not significantly different from stored piped water (P = 0.20). Microbial contamination of stored water was significantly associated with observed storage and handling practices, including dipping hands or receptacles in water (P < 0.005), and having an uncovered storage container (P = 0.052).
Conclusions
The microbial quality of ‘improved’ water sources in our study area was not maintained at the point of consumption, possibly due to a combination of mixing water sources at the household level, unsafe storage and handling practices, and inadequately treated piped‐to‐plot water. These results have implications for refining international targets for safe drinking water access as well as the assumptions underlying global burden of disease estimates, which posit that ‘improved’ sources pose minimal risks of diarrhoeal diseases.