Background
SARS‐CoV‐2 coronavirus infection ranges from asymptomatic through to fatal COVID‐19, characterized by a ‘cytokine storm’ and lung failure. Vitamin D deficiency has been postulated as a determinant of severity.
Objectives
To review the evidence relevant to vitamin D and COVID‐19.
Methods
Narrative review.
Results
Regression modelling shows that more northerly countries in the Northern Hemisphere are currently (May 2020) showing relatively high COVID‐19 mortality, with an estimated 4.4% increase in mortality for each degree of latitude north of 28 degrees North (P = 0.031) after adjustment for population age. This supports a role for ultraviolet B acting via vitamin D synthesis. Factors associated with worse COVID‐19 prognosis include old age, ethnicity, male sex, obesity, diabetes and hypertension, and these also associate with deficiency of vitamin D or of the response to it. Vitamin D deficiency is also linked to the severity of childhood respiratory illness. Experimentally, vitamin D increases the ratio of angiotensin‐converting enzyme 2 (ACE2) to ACE, thus increasing angiotensin II hydrolysis and reducing the subsequent inflammatory cytokine response to pathogens and lung injury.
Conclusions
Substantial evidence supports a link between vitamin D deficiency and COVID‐19 severity but it is all indirect. Community‐based placebo‐controlled trials of vitamin D supplementation may be difficult. Further evidence could come from study of COVID‐19 outcomes in large cohorts with information on prescribing data for vitamin D supplementation or assay of serum unbound 25(OH) vitamin D levels. Meanwhile, vitamin D supplementation should be strongly advised for people likely to be deficient.
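The latitude effect reported in the Results corresponds to a log-linear regression of mortality on latitude, where the fitted slope translates into a percentage increase per degree. A minimal sketch of that calculation, using invented illustrative numbers rather than the study's dataset (and omitting the study's adjustment for population age):

```python
import numpy as np

# Hypothetical country-level data (illustrative only, NOT the study's):
# latitude in degrees north, and COVID-19 deaths per million population.
latitude = np.array([30.0, 35.0, 40.0, 45.0, 50.0, 55.0, 60.0])
mortality = np.array([20.0, 28.0, 35.0, 55.0, 70.0, 95.0, 130.0])

# Log-linear model: log(mortality) = a + b * (latitude - 28).
# A slope b maps to a per-degree percentage increase of exp(b) - 1,
# so b ≈ 0.043 would reproduce the reported 4.4% per degree.
x = latitude - 28.0
b, a = np.polyfit(x, np.log(mortality), 1)  # returns [slope, intercept]

pct_per_degree = (np.exp(b) - 1.0) * 100.0
print(f"Estimated mortality increase per degree of latitude: {pct_per_degree:.1f}%")
```

The exponential form matters: a mortality rate growing by a fixed percentage per degree is linear in log(mortality), not in mortality itself.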
Background
Recent research indicates that vitamin D may have immune-supporting properties, modulating both the adaptive and innate immune systems through cytokines and the regulation of cell signalling pathways. We hypothesize that vitamin D status may influence the severity of responses to Covid-19 and that the prevalence of vitamin D deficiency in Europe will be closely aligned with Covid-19 mortality.
Methods
We conducted a literature search on PubMed (no language restriction) of vitamin D status in older adults in countries/areas of Europe affected by Covid-19 infection. Countries were selected by severity of infection (high and low), and data were limited to national surveys or, where these were not available, to geographic areas within the country affected by infection. Covid-19 infection and mortality data were gathered from the World Health Organisation.
Results
Counter-intuitively, lower-latitude and typically ‘sunny’ countries such as Spain and Italy (particularly Northern Italy) had low mean concentrations of 25(OH)D and high rates of vitamin D deficiency. These countries have also been experiencing the highest infection and death rates in Europe. The northern-latitude countries (Norway, Finland, Sweden), which receive less UVB sunlight than Southern Europe, actually had much higher mean 25(OH)D concentrations, low levels of deficiency and, for Norway and Finland, lower infection and death rates. The correlation between 25(OH)D concentration and mortality rate reached conventional significance (P = 0.046) by Spearman's rank correlation.
Conclusions
Optimising vitamin D status in line with recommendations from national and international public health agencies will certainly benefit bone health and may also benefit Covid-19 outcomes. There is a strongly plausible biological hypothesis and evolving epidemiological data supporting a role for vitamin D in Covid-19.
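The mortality association above was tested with Spearman's rank correlation, which is simply the Pearson correlation of the ranks. A self-contained sketch of that statistic, run on invented illustrative values rather than the reviewed national survey data:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.

    Valid as written only for tie-free data; tied values would need midranks.
    """
    rank_x = np.argsort(np.argsort(x))
    rank_y = np.argsort(np.argsort(y))
    return np.corrcoef(rank_x, rank_y)[0, 1]

# Hypothetical country-level values (illustrative only, NOT the surveys):
# mean 25(OH)D concentration (nmol/L) vs. Covid-19 deaths per million.
mean_25ohd = np.array([26, 28, 33, 44, 45, 48, 50, 56, 60, 65])
deaths_per_million = np.array([580, 540, 300, 250, 90, 100, 40, 45, 30, 44])

rho = spearman_rho(mean_25ohd, deaths_per_million)
print(f"Spearman rho = {rho:.2f}")  # strongly negative: lower 25(OH)D, higher mortality
```

Using ranks rather than raw values makes the test robust to the heavy skew typical of national mortality figures.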
We demonstrated coherent control of a quantum two-level system based on two-electron spin states in a double quantum dot, allowing state preparation, coherent manipulation, and projective readout. These techniques are based on rapid electrical control of the exchange interaction. Separating and later recombining a singlet spin state provided a measurement of the spin dephasing time, T₂*, of approximately 10 nanoseconds, limited by hyperfine interactions with the gallium arsenide host nuclei. Rabi oscillations of two-electron spin states were demonstrated, and spin-echo pulse sequences were used to suppress hyperfine-induced dephasing. Using these quantum control techniques, a coherence time for two-electron spin states exceeding 1 microsecond was observed.
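The separate-and-recombine T₂* measurement can be illustrated with a toy Monte Carlo: if each run sees a frozen, Gaussian-distributed hyperfine field gradient, the run-averaged singlet return probability decays as P(t) = ½(1 + exp(−(t/T₂*)²)). A sketch assuming a 10 ns dephasing time (an illustrative value, not the experiment's calibration):

```python
import numpy as np

rng = np.random.default_rng(0)

T2_star = 10e-9                       # assumed dephasing time: 10 ns
sigma = np.sqrt(2.0) / T2_star        # std of the random precession rate (rad/s)

# Each run sees a frozen Overhauser field gradient -> precession rate omega.
omegas = rng.normal(0.0, sigma, size=5000)
t = np.linspace(0.0, 40e-9, 400)

# One run's singlet return probability is (1 + cos(omega * t)) / 2;
# averaging over runs yields the Gaussian decay (1 + exp(-(t/T2*)^2)) / 2.
P_singlet = 0.5 * (1.0 + np.cos(np.outer(t, omegas))).mean(axis=1)

# Recover T2* as the delay where the envelope 2P - 1 falls to 1/e.
envelope = 2.0 * P_singlet - 1.0
t2_estimate = t[np.argmin(np.abs(envelope - np.exp(-1.0)))]
print(f"Estimated T2* = {t2_estimate * 1e9:.1f} ns")
```

The Gaussian (rather than exponential) envelope is the signature of a quasi-static noise bath, which is also why the spin-echo sequences mentioned above can undo much of the dephasing.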
All clocks, in some form or another, use the evolution of nature toward higher entropy states to quantify the passage of time. Because of the statistical nature of the second law and the corresponding entropy flows, fluctuations fundamentally limit the performance of any clock. This suggests a deep relation between the increase in entropy and the quality of clock ticks. Indeed, minimal models for autonomous clocks in the quantum realm revealed that a linear relation can be derived, where for a limited regime every bit of entropy linearly increases the accuracy of quantum clocks. But can such a linear relation persist as we move toward a more classical system? We answer this in the affirmative by presenting the first experimental investigation of this thermodynamic relation in a nanoscale clock. We stochastically drive a nanometer-thick membrane and read out its displacement with a radio-frequency cavity, allowing us to identify the ticks of a clock. We show theoretically that the maximum possible accuracy for this classical clock is proportional to the entropy created per tick, similar to the known limit for a weakly coupled quantum clock but with a different proportionality constant. We measure both the accuracy and the entropy. Once nonthermal noise is accounted for, we find that there is a linear relation between accuracy and entropy and that the clock operates within an order of magnitude of the theoretical bound.
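The accuracy figure used for such clocks is N = μ²/σ² of the tick-interval distribution, roughly the number of ticks before the clock is off by one tick. A toy model (not the membrane experiment) makes the linear accuracy-dissipation scaling concrete: if a tick fires after n irreversible sub-steps, each dissipating a fixed entropy, then N grows linearly with n:

```python
import numpy as np

rng = np.random.default_rng(1)

def clock_accuracy(n_steps, n_ticks=20000):
    """Accuracy N = mu^2 / sigma^2 of simulated tick intervals.

    Toy model: each tick completes after n_steps exponential waiting times,
    so the entropy dissipated per tick grows linearly with n_steps.
    """
    intervals = rng.exponential(1.0, size=(n_ticks, n_steps)).sum(axis=1)
    return intervals.mean() ** 2 / intervals.var()

# For a sum of n exponential steps, N ≈ n: doubling the dissipative steps
# per tick roughly doubles the accuracy, mirroring the linear relation.
for n in (10, 20, 40):
    print(f"{n} steps per tick -> accuracy N ≈ {clock_accuracy(n):.1f}")
```

This is only an illustration of the accuracy metric; the membrane clock's bound differs in its proportionality constant, as the abstract notes.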
Although electron spins in III-V semiconductor quantum dots have shown great promise as qubits, hyperfine decoherence remains a major challenge in these materials. Group IV semiconductors possess dominant nuclear species that are spinless, allowing qubit coherence times up to 2 s. In carbon nanotubes, where the spin-orbit interaction allows for all-electrical qubit manipulation, theoretical predictions of the coherence time vary by at least six orders of magnitude and range up to 10 s or more. Here, we realize a qubit encoded in two nanotube valley-spin states, with coherent manipulation via electrically driven spin resonance mediated by a bend in the nanotube. Readout uses Pauli blockade leakage current through a double quantum dot. Arbitrary qubit rotations are demonstrated and the coherence time is measured for the first time via Hahn echo, allowing comparison with theoretical predictions. The coherence time is found to be ∼65 ns, probably limited by electrical noise. This shows that, even with low nuclear spin abundance, coherence can be strongly degraded if the qubit states are coupled to electric fields.
We introduce the “displacemon” electromechanical architecture that comprises a vibrating nanobeam, e.g., a carbon nanotube, flux coupled to a superconducting qubit. This platform can achieve strong and even ultrastrong coupling, enabling a variety of quantum protocols. We use this system to describe a protocol for generating and measuring quantum interference between trajectories of a nanomechanical resonator. The scheme uses a sequence of qubit manipulations and measurements to cool the resonator, to apply two effective diffraction gratings, and then to measure the resulting interference pattern. We demonstrate the feasibility of generating a spatially distinct quantum superposition state of motion containing more than 10⁶ nucleons using a vibrating nanotube acting as a junction in this new superconducting qubit configuration.
We present new observational determinations of the evolution of the 2–10 keV X-ray luminosity function (XLF) of active galactic nuclei (AGN). We utilize data from a number of surveys, including both the 2 Ms Chandra Deep Fields and the AEGIS-X 200 ks survey, enabling accurate measurements of the evolution of the faint end of the XLF. We combine direct, hard X-ray selection and spectroscopic follow-up or photometric redshift estimates at z < 1.2 with a rest-frame UV colour pre-selection approach at higher redshifts to avoid biases associated with catastrophic failure of the photometric redshifts. Only robust optical counterparts to X-ray sources are considered, using a likelihood-ratio matching technique. A Bayesian methodology is developed that considers redshift probability distributions, incorporates selection functions for our high-redshift samples and allows robust comparison of different evolutionary models. We statistically account for X-ray sources without optical counterparts to correct for incompleteness in our samples. We also account for Poissonian effects on the X-ray flux estimates and sensitivities and thus correct for the Eddington bias. We find that the XLF retains the same shape at all redshifts, but undergoes strong luminosity evolution out to z ∼ 1, and an overall negative density evolution with increasing redshift, which thus dominates the evolution at earlier times. We do not find evidence that a luminosity-dependent density evolution, and the associated flattening of the faint-end slope, is required to describe the evolution of the XLF. We find significantly higher space densities of low-luminosity, high-redshift AGN than in prior studies, and a smaller shift in the peak of the number density to lower redshifts with decreasing luminosity. The total luminosity density of AGN peaks at z = 1.2 ± 0.1, but there is a mild decline to higher redshifts. We find that >50 per cent of black hole growth takes place at z > 1, with around half in L_X < 10⁴⁴ erg s⁻¹ AGN.
We present a new method for determining the sensitivity of X-ray imaging observations, which correctly accounts for the observational biases that affect the probability of detecting a source of a given X-ray flux, without the need to perform a large number of time-consuming simulations. We use this new technique to estimate the X-ray source counts in different spectral bands (0.5–2, 0.5–10, 2–10 and 5–10 keV) by combining deep pencil-beam and shallow wide-area Chandra observations. The sample has a total of 6295 unique sources over an area of 11.8 deg² and is the largest used to date to determine the X-ray number counts. We determine, for the first time, the break flux in the 5–10 keV band, in the case of a double power-law source count distribution. We also find an upturn in the 0.5–2 keV counts at fluxes below about 6 × 10⁻¹⁷ erg s⁻¹ cm⁻². We show that this can be explained by the emergence of normal star-forming galaxies, which dominate the X-ray population at faint fluxes. The fraction of the diffuse X-ray background resolved into point sources in different spectral bands is also estimated. It is argued that a single population of Compton-thick active galactic nuclei (AGN) cannot be responsible for the entire unresolved X-ray background in the 2–10 keV energy range.
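The break flux quoted above refers to a double power-law form for the differential source counts dN/dS: a flatter slope below the break flux and a steeper one above it, continuous at the break. A minimal sketch of that functional form, with illustrative parameter values rather than the paper's fits:

```python
import numpy as np

def dn_ds(S, S_break=1e-14, slope_faint=1.6, slope_bright=2.5, norm=1.0):
    """Differential number counts dN/dS as a double (broken) power law.

    Follows S^-slope_faint below the break flux and S^-slope_bright above,
    continuous at S_break. All parameter values here are illustrative;
    fluxes are in erg s^-1 cm^-2.
    """
    S = np.asarray(S, dtype=float)
    faint = norm * (S / S_break) ** (-slope_faint)
    bright = norm * (S / S_break) ** (-slope_bright)
    return np.where(S < S_break, faint, bright)

# The counts steepen above the break: a decade in flux costs 1.6 dex of
# sources below the break but 2.5 dex above it.
for flux in (1e-15, 1e-14, 1e-13):
    print(f"S = {flux:.0e}: dN/dS = {float(dn_ds(flux)):.3e}")
```

Writing both branches relative to S_break with a shared normalization is what guarantees continuity at the break, which is essential when fitting for the break flux itself.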
The Common Model of Cognition (CMC) is a recently proposed consensus architecture intended to capture decades of progress in cognitive science on modeling human and human-like intelligence. Because of the broad agreement around it and preliminary mappings of its components to specific brain areas, we hypothesized that the CMC could be a candidate model of the large-scale functional architecture of the human brain. To test this hypothesis, we analyzed functional MRI data from 200 participants and seven different tasks that cover a broad range of cognitive domains. The CMC components were identified with functionally homologous brain regions through canonical fMRI analysis, and their communication pathways were translated into predicted patterns of effective connectivity between regions. The resulting dynamic linear model was implemented and fitted using Dynamic Causal Modeling, and compared against six alternative brain architectures that had been previously proposed in the field of neuroscience (three hierarchical architectures and three hub-and-spoke architectures) using a Bayesian approach. The results show that, in all cases, the CMC vastly outperforms all other architectures, both within each domain and across all tasks. These findings suggest that a common set of architectural principles that could be used for artificial intelligence also underpins human brain function across multiple cognitive domains.