Quantitative susceptibility mapping allows overcoming several nonlocal restrictions of susceptibility-weighted and phase imaging and enables quantification of magnetic susceptibility. We compared the diagnostic accuracy of quantitative susceptibility mapping and R2* (1/T2*) mapping to discriminate between patients with Parkinson disease and controls.
For 21 patients with Parkinson disease and 21 age- and sex-matched controls, 2 radiologists measured the quantitative susceptibility mapping values and R2* values in 6 brain structures (the thalamus, putamen, caudate nucleus, pallidum, substantia nigra, and red nucleus).
The quantitative susceptibility mapping values and R2* values of the substantia nigra were significantly higher in patients with Parkinson disease (P < .01); measurements in other brain regions did not differ significantly between patients and controls. For the discrimination of patients with Parkinson disease from controls, receiver operating characteristic analysis suggested that the optimal cutoff values for the substantia nigra, based on the Youden Index, were >0.210 for quantitative susceptibility mapping and >28.8 for R2*. The sensitivity, specificity, and accuracy of quantitative susceptibility mapping were 90% (19 of 21), 86% (18 of 21), and 88% (37 of 42), respectively; for R2* mapping, they were 81% (17 of 21), 52% (11 of 21), and 67% (28 of 42). Pair-wise comparisons showed that the areas under the receiver operating characteristic curves were significantly larger for quantitative susceptibility mapping than for R2* mapping (0.91 versus 0.69, P < .05).
Quantitative susceptibility mapping showed higher diagnostic performance than R2* mapping for the discrimination between patients with Parkinson disease and controls.
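The cutoff values above were chosen by the Youden index, J = sensitivity + specificity - 1, maximized over candidate thresholds on the ROC curve. A minimal sketch of that selection rule follows; the numbers are invented for illustration and are not the study's measurements.

```python
# Hedged sketch: picking an ROC cutoff by the Youden index (J = sens + spec - 1)
# for a "higher value = positive" test. All values below are illustrative only.

def youden_cutoff(cases, controls):
    """Return (best_cutoff, sensitivity, specificity) maximizing the Youden index."""
    best = (None, 0.0, 0.0, -1.0)  # (cutoff, sens, spec, J)
    for c in sorted(set(cases) | set(controls)):
        sens = sum(v > c for v in cases) / len(cases)         # true-positive rate at c
        spec = sum(v <= c for v in controls) / len(controls)  # true-negative rate at c
        j = sens + spec - 1.0
        if j > best[3]:
            best = (c, sens, spec, j)
    return best[:3]

# Illustrative susceptibility-like values (hypothetical, in ppm)
cases = [0.25, 0.30, 0.22, 0.28, 0.19]
controls = [0.15, 0.18, 0.21, 0.17, 0.20]
cutoff, sens, spec = youden_cutoff(cases, controls)
```

With these toy values the rule selects the threshold where the sum of sensitivity and specificity peaks, mirroring how the ">0.210" and ">28.8" cutoffs would have been derived from the real measurements.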
OBJECTIVES: This study aimed to clarify the impact of the coronavirus disease 2019 outbreak on the levels of activity among older patients with frailty or underlying diseases. A total of 175 patients (79.0±7.0 years) undergoing outpatient or home-based rehabilitation were stratified into groups based on frailty status. The percentage of patients who went out at least once a week decreased after the outbreak from 91% to 87%, from 65% to 46%, and from 47% to 36% in the non-frail, frail, and nursing care requirement groups, respectively. The proportion of older patients participating in exercise during the outbreak was 75%, 51%, and 41% in the non-frail, frail, and nursing care requirement groups, respectively. The proportion of older patients participating in voluntary exercise after instruction was lowest in the frail group (35%). Older patients with frailty are susceptible to the negative effects of refraining from physical activity and require careful management.
Summary
Background
Myositis‐specific autoantibodies (MSAs) are associated with unique clinical subsets in polymyositis/dermatomyositis (PM/DM). Autoantibodies against transcriptional intermediary factor (TIF)‐1γ and TIF‐1α are known to be MSAs. Previously, we reported that TIF‐1β is also targeted in patients with DM with or without concomitant anti‐TIF‐1α/γ antibodies.
Objectives
To evaluate the clinical features of seven cases with anti‐TIF‐1β antibodies alone.
Methods
Serum autoantibody profiles were determined, and protein and RNA immunoprecipitation studies were conducted. Western blotting was performed to confirm autoantibody reactivity against TIF‐1β.
Results
Anti‐TIF‐1β antibody was identified by immunoprecipitation assay in 24 cases. Among them, seven patients were positive for anti‐TIF‐1β antibody alone. Six of the seven patients were classified as having DM. Among the six cases of DM, two patients had no muscle weakness and normal creatine kinase (CK) levels, and were classified as having clinically amyopathic DM. Four patients had muscle weakness, but three of them had normal serum CK levels and responded well to systemic steroids. Characteristic skin features of DM, such as Gottron sign, periungual erythema, punctate haemorrhage on the perionychium and facial erythema including heliotrope, were observed in 86%, 57%, 86% and 71% of our cases, respectively. One of the seven patients had appendiceal cancer. None of the patients had interstitial lung disease.
Conclusions
Seven patients were confirmed to have anti‐TIF‐1β antibody without any other MSAs, including TIF‐1α/γ antibodies, and six of them were diagnosed with DM. We suggest that anti‐TIF‐1β antibody is an MSA, and that it is associated with clinically amyopathic DM or DM with mild myopathy.
What's already known about this topic?
Previously we reported that transcriptional intermediary factor (TIF)‐1β is also targeted in patients with dermatomyositis with or without concomitant anti‐TIF‐1α/γ antibodies.
Anti‐TIF‐1β antibody could be a myositis‐specific autoantibody.
What does this study add?
We describe seven patients with anti‐TIF‐1β antibody but without anti‐TIF‐1α/γ reactivity.
Anti‐TIF‐1β antibody is possibly associated with clinically amyopathic dermatomyositis or dermatomyositis with mild myopathy.
Linked Comment: Fiorentino. Br J Dermatol 2019; 180:709–710.
This paper summarises the operational experience and improvements of the ATLAS hierarchical multi-tier computing infrastructure in the past year, leading up to the taking and processing of the first collisions in 2009 and 2010. Special focus is given to the Tier-0, which is responsible, among other things, for prompt processing of the raw data coming from the online DAQ system and is thus a critical part of the chain. We give an overview of the Tier-0 architecture and of improvements based on operational experience. Emphasis is put on the new developments, namely the Task Management System, which opens the Tier-0 to expert users, and the Web 2.0 monitoring and management suite. We then review the performance achieved with the distributed computing system, discuss the data access patterns observed over the grid, and describe how we used this information to improve analysis rates.
In this paper, we demonstrate an actively Q-switched, radially polarized, laser-diode end-pumped Nd:YAG laser with an acousto-optic modulator as the Q switch and a photonic crystal grating as the output coupler. The laser generated pulses of 26.4–67.2 ns duration; the repetition rate could be continuously adjusted from 500 Hz to 9.238 kHz, with peak power up to 7.75 kW. Such radially polarized pulses should facilitate numerous applications.
We report on laser-diode-pumped, low-threshold, compact, passively Q-switched Yb:YAG microchip lasers with Cr4+:YAG crystals as the saturable absorbers. The laser threshold at the fundamental wavelength of 1.03 μm is as low as 0.25 W, the slope efficiency is as high as 36.8%, and the optical-to-optical efficiency reaches 27% for 95% initial transmission of the Cr4+:YAG crystal. A pulse width of 1.35 ns and a peak power of over 8.2 kW were obtained. Using a 5 mm thick KTP crystal as the second-harmonic-generation medium, 514.7 nm green light with 155 mW power was generated. A pulse duration of 480 ps was obtained at 1.03 μm by using a Cr4+:YAG saturable absorber with 85% initial transmission. Stable single-longitudinal-mode oscillation and widely separated multi-longitudinal-mode oscillation due to the etalon effect of the thin Cr4+:YAG plate were achieved at different pump power levels.
The Belle II experiment employs the ROOT file format for recording data and is investigating the use of "index-files" to reduce the size of data skims. These files contain pointers to the locations of interesting events within the total Belle II data set and reduce the size of data skims by two orders of magnitude. We implement this scheme on the Belle II grid by recording the parent file metadata and the event location within the parent file. While the scheme works, it is substantially slower than a normal sequential read of standard skim files using default ROOT file parameters. We investigate the performance of the scheme by adjusting the "splitLevel" and "autoflushsize" parameters of the ROOT files in the parent data files.
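The index-file idea can be sketched in a few lines: the skim stores only (parent file, event index) pointers, and a read resolves those pointers against the full parent files, grouping by parent so each file is traversed once in increasing event order. The file names, record layout, and selection predicate below are invented for illustration; real Belle II skims use ROOT I/O, not in-memory dicts.

```python
# Hedged sketch of an index-file skim: store pointers, not event copies.
# Names, record layout, and the "charm" selection are hypothetical.
from collections import defaultdict

# "Parent" data files: the full event samples (here just lists in memory).
parents = {
    "data_0001.root": [{"evt": i, "charm": i % 7 == 0} for i in range(100)],
    "data_0002.root": [{"evt": i, "charm": i % 7 == 3} for i in range(100)],
}

# Building the skim: record (parent_file, event_index) pointers to selected events
# instead of copying the events themselves.
index_file = [
    (name, pos)
    for name, events in parents.items()
    for pos, rec in enumerate(events)
    if rec["charm"]
]

# Reading the skim: group pointers by parent so each parent is "opened" once and
# read in increasing event order, mimicking a mostly sequential access pattern.
by_parent = defaultdict(list)
for name, pos in index_file:
    by_parent[name].append(pos)

skim = [parents[name][pos] for name in by_parent for pos in sorted(by_parent[name])]
```

The size saving comes from `index_file` holding only small pointers; the performance cost the abstract describes arises because resolving pointers forces selective reads inside the parent files, which is where ROOT's basket-level parameters ("splitLevel", "autoflushsize") come into play.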
The International Conference on Computing in High Energy and Nuclear Physics (CHEP) is a major series of international conferences intended to attract physicists and computing professionals to discuss recent developments and trends in software and computing for their research communities. Experts from the high-energy and nuclear physics, computer science, and information technology communities attend CHEP events. This conference series provides an international forum to exchange the experiences and needs of a wide community, and to present and discuss recent, ongoing, and future activities. At the beginning of the successful series of CHEP conferences in 1985, the latest developments in embedded systems, networking, and vector and parallel processing were presented in Amsterdam. The software and computing ecosystem has evolved massively since then, and along this path each CHEP event has marked a further step. A vibrant community of experts on a wide range of high-energy and nuclear physics experiments, as well as technology explorers and industry contacts, attend to discuss present and future challenges and to shape the future of an entire community. In such a rapidly evolving area, aiming to capture the state of the art in software and computing through a collection of proceedings papers in a journal is a big challenge. Because of the large attendance, the final papers appear in the journal a few months after the conference is over. Additionally, the contributions often report on studies at very heterogeneous stages: completed, just started, or yet to be done. It is not uncommon that, by the time a specific paper appears in the journal, some of the work is over a year old, or the investigation has actually proceeded in different directions and with different methodologies than originally presented at the conference just a few months before.
And by the time the proceedings appear in journal form, new ideas and explorations have quickly formed, have already started, and presumably have also followed previously unpredictable directions. In this scenario, it is normal and healthy for the entire community to ask itself whether a set of proceedings is the best way to document and communicate to peers, present and future, the work done at a precise time and the vivid, live ideas of a precise moment in the evolution of the discipline. Looking at a specific CHEP event alone does not give the right answer: the heritage value lies in the quality and continuity of the documentation work, despite changes of times, trends and actors. The CHEP proceedings, in their variety and thanks to the condensed form of knowledge they offer, are what will most likely be preserved most easily for future generations, thanks to the outstanding efforts in digital libraries for all kinds of cultural heritage. Since 1985, this long-standing tradition has continued, now with the 21st CHEP edition in Okinawa. The successful model that brings together high-energy and nuclear physicists and computer scientists was repeated in the Okinawa prefecture, an outstanding location consisting of a few dozen small islands in the southern half of the Nansei Shoto, the island chain that stretches about one thousand kilometres from Kyushu to Taiwan. The OIST (Okinawa Institute of Science and Technology) centre hosted the event and offered an outstanding location and efficient facilities. As throughout CHEP history, contributions from 'general purpose' physics experiments mixed with highly specialized work at the frontiers of precision and intensity. The year 2015 is marked by the LHC restart in Run 2.
Experimental groups at the LHC reviewed and presented their Run 1 experiences in detail, and reported the work done in acquiring the latest computing and software technologies, as well as in evolving their computing models in preparation for Run 2 (and beyond). On the intensity frontier, 2015 also marks the start of SuperKEKB commissioning. Fixed-target experiments at CERN, Fermilab and J-PARC are growing bigger in size. In the field of nuclear physics, FAIR is under construction and RHIC is well engaged in its Phase II research programme, facing increased datasets and new challenges with precision physics. For the future, developments are progressing towards the construction of the ILC. In all these projects, computing and software will be even more important than before. Beyond those examples, non-accelerator experiments reported on their search for novel computing models as their apparatus and operations become larger and more distributed. The CHEP edition in Okinawa explored the synergy of HEP experimental physicists and computer scientists with data engineers and data scientists even further. Many areas of research were covered, and the techniques developed and adopted were presented with a richness and diversity never seen before. In numbers, CHEP 2015 attracted a very high number of oral and poster contributions, 535 in total, and hosted 450 participants from 28 countries. For the first time in the conference's history, a system of 'keywords' at abstract-submission time was set up and exploited to produce conference tracks depending on the topics covered in the proposed contributions. Authors were asked to select 'application keywords' and/or 'technology keywords' to specify the content of their contribution. This bottom-up approach, tried at CHEP 2015 in Okinawa for the first time in the history of the conference series, met with broad satisfaction both in the International Advisory Committee and among the conference attendees.
This process created eight topical tracks, well balanced in content, manageable in terms of numbers of contributions, and able to create adequate discussion space for trending topics (e.g. cloud computing and virtualization). CHEP 2015 hosted contributions on online computing; offline software; data store and access; middleware, software development and tools, experiment frameworks, and tools for distributed computing; computing activities and computing models; facilities, infrastructure and networks; clouds and virtualization; and performance increase and optimization exploiting hardware features. Throughout the entire process, we were blessed with a forward-looking group of competent colleagues in our International Advisory Committee, whom we warmly thank. All the individuals in the Program Committee team, who put together the technical tracks of the conference and reviewed all papers to prepare the sections of this proceedings journal, are to be credited for their outstanding work. And of course our gratitude goes to all the people who submitted a contribution, presented it, and spent time preparing a careful paper to document the work. These people, first and foremost, are the main authors of the big success that CHEP continues to be. After almost 30 years and 21 CHEP editions, this conference cycle continues to stay strong and to evolve in rapidly changing times towards a challenging future, covering new ground and intercepting new trends as our field of research evolves. The next stop in this journey will be the 22nd CHEP Conference on October 12th-14th in San Francisco, hosted by SLAC and LBNL.
Summary
Diabetes is associated with a marked increase in the risk of atherosclerotic vascular disorders, including coronary, cerebrovascular and peripheral artery disease. Cardiovascular disease (CVD) could account for the disability and high mortality rates in patients with diabetes. Conventional risk factors, including hyperlipidaemia, hypertension, smoking, obesity, lack of exercise and a positive family history, contribute similarly to macrovascular complications in type 2 diabetic patients and non‐diabetic subjects. The levels of these factors in diabetic patients are certainly increased, but not enough to explain the exaggerated risk for macrovascular complications in the diabetic population. Therefore, specific diabetes‐related risk factors must be involved in the excess risk in diabetic patients. In this paper, we review the molecular mechanisms of accelerated atherosclerosis in diabetes, focusing especially on postprandial hyperglycaemia. We also discuss potential therapeutic strategies that specifically target CVD in patients with diabetes.