The Vera C. Rubin Observatory is preparing to execute the most ambitious astronomical survey ever attempted, the Legacy Survey of Space and Time (LSST). The final phase of construction is currently under way in the Chilean Andes, with the Observatory's ten-year science mission scheduled to begin in 2025. Rubin's 8.4-meter telescope will scan the southern sky nightly, collecting imagery in the 320–1050 nm wavelength range and covering the entire observable sky every four nights with a 3.2-gigapixel camera, the largest imaging device ever built for astronomy. Sophisticated algorithms will perform automated detection and classification of celestial objects on the high-resolution images, progressively building an astronomical catalog eventually comprising 20 billion galaxies and 17 billion stars together with their associated physical properties. In this article we present an overview of the system currently being constructed to perform data distribution, as well as the annual campaigns that reprocess the entire image dataset collected since the beginning of the survey. These processing campaigns will utilize computing and storage resources provided by three Rubin data facilities (one in the US and two in Europe). Each year a Data Release will be produced and disseminated to science collaborations for use in studies spanning four main science pillars: probing dark matter and dark energy, taking inventory of solar system objects, exploring the transient optical sky, and mapping the Milky Way. Also presented is the method by which we leverage some of the common tools and best practices used for management of large-scale distributed data processing projects in the high energy physics and astronomy communities, and how these tools and practices are applied within the Rubin project to overcome the specific challenges faced by the Observatory.
This paper examines formations of the 'self' as a wise leader at both personal and professional levels. Drawing on Aristotle and theories of reflective and experiential learning, the possibility of cultivating wisdom through an experience of a leadership course, provided at the University of Melbourne for PhD candidates, is explored. Applying an autoethnographic methodology, a reflexive account of a doctoral student who experienced this programme is narrated to explore the (re)construction of a wiser 'self'. A synthesis of multiple points of personal and collective judgements, decision-making and performative actions leads to promising implications for future research, and for deeper pedagogical practices in leadership programmes.
• Textural features on daily images outperformed dose/volume parameters.
• The best models could be used before or at mid-treatment for personalisation.
• Area under the curve of the best models at 6, 12 and 24 months was 0.69, 0.74 and 0.86.
The images acquired during radiotherapy for image-guidance purposes could be used to monitor patient-specific response to irradiation and improve treatment personalisation. We investigated whether the kinetics of radiomics features from daily mega-voltage CT image-guidance scans (MVCT) improve prediction of moderate-to-severe xerostomia compared to dose/volume parameters in radiotherapy of head-and-neck cancer (HNC).
All included HNC patients (N = 117) received 30 or more fractions of radiotherapy with daily MVCTs. Radiomics features were calculated on the contra-lateral parotid glands of daily MVCTs. Their variations over time after each complete week of treatment were used to predict moderate-to-severe xerostomia (CTCAEv4.03 grade ≥ 2) at 6, 12 and 24 months post-radiotherapy. After dimensionality reduction, backward/forward selection was used to generate combinations of predictors.
Three types of logistic regression model were generated for each follow-up time: 1) a pre-treatment reference model using dose/volume parameters, 2) a combination of dose/volume and radiomics-based predictors, and 3) radiomics-based predictors. The models were internally validated by cross-validation and bootstrapping and their performance evaluated using Area Under the Curve (AUC) on separate training and testing sets.
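The model-building scheme described above can be sketched in a few lines of scikit-learn. This is a minimal illustration only: the predictors, the synthetic cohort, and all numeric values are assumptions for demonstration, not the study's actual features, data, or results.

```python
# Sketch of the comparison described above: logistic regression models
# evaluated by cross-validated AUC on a training set and by AUC on a
# held-out test set. All variables and data here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)
n_patients = 117

# Hypothetical predictors: a dose/volume parameter and a radiomics kinetic.
mean_dose = rng.normal(26, 6, n_patients)        # Gy, contralateral parotid
radiomic_delta = rng.normal(0, 1, n_patients)    # weekly feature change (z-scored)
X_dose = mean_dose.reshape(-1, 1)
X_comb = np.column_stack([mean_dose, radiomic_delta])

# Synthetic binary outcome: moderate-to-severe xerostomia (grade >= 2).
logit = 0.08 * (mean_dose - 26) + 0.9 * radiomic_delta
y = (rng.random(n_patients) < 1 / (1 + np.exp(-logit))).astype(int)

for name, X in [("dose-only", X_dose), ("dose + radiomics", X_comb)]:
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=1)
    model = LogisticRegression().fit(X_tr, y_tr)
    # Internal validation on the training set, then held-out evaluation.
    auc_cv = cross_val_score(model, X_tr, y_tr, cv=5,
                             scoring="roc_auc").mean()
    auc_test = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC_train(CV) = {auc_cv:.2f}, AUC_test = {auc_test:.2f}")
```

On synthetic data like this, the combined model would typically score higher than the dose-only model, mirroring the comparison the study performs on real predictors.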
Moderate-to-severe xerostomia was reported by 46 %, 33 % and 26 % of the patients at 6, 12 and 24 months, respectively. The selected models using radiomics-based features extracted at or before mid-treatment outperformed the dose-based models, with an AUC_train/AUC_test of 0.70/0.69, 0.76/0.74 and 0.86/0.86 at 6, 12 and 24 months, respectively.
Our results suggest that radiomics features calculated on MVCTs from the first half of the radiotherapy course improve prediction of moderate-to-severe xerostomia in HNC patients compared to a dose-based pre-treatment model.
While core to the scientific approach, reproducibility of experimental results is challenging in radiomics studies. A recent publication identified radiomics features that are predictive of late irradiation-induced toxicity in head and neck cancer (HNC) patients. In this study, we assessed the generalisability of these findings.
The procedure described in the publication in question was applied to a cohort of 109 HNC patients treated with 50–70 Gy in 20–35 fractions using helical radiotherapy although there were inherent differences between the two patient populations and methodologies. On each slice of the planning CT with delineated parotid and submandibular glands, the imaging features that were previously identified as predictive of moderate-to-severe xerostomia and sticky saliva 12 months post radiotherapy (Xer12m and SS12m) were calculated. Specifically, Short Run Emphasis (SRE) and maximum CT intensity (maxHU) were evaluated for improvement in prediction of Xer12m and SS12m respectively, compared to models solely using baseline toxicity and mean dose to the salivary glands.
None of the associations previously identified as statistically significant and involving radiomics features in univariate or multivariate models could be reproduced on our cohort.
The discrepancies observed between the results of the two studies delineate limits to the generalisability of the previously reported findings. This may be explained by the differences in the approaches, in particular the imaging characteristics and subsequent methodological implementation. This highlights the importance of external validation, high quality reporting guidelines and standardisation protocols to ensure generalisability, replication and ultimately clinical implementation.
The LSST DESC DC2 Simulated Sky Survey
Abolfathi, Bela; Alonso, David; Armstrong, Robert ...
The Astrophysical Journal Supplement Series, 03/2021, Volume 253, Issue 1
Journal Article · Peer-reviewed · Open access
Abstract
We describe the simulated sky survey underlying the second data challenge (DC2) carried out in preparation for analysis of the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) by the LSST Dark Energy Science Collaboration (LSST DESC). Significant connections across multiple science domains will be a hallmark of LSST; the DC2 program represents a unique modeling effort that stresses this interconnectivity in a way that has not been attempted before. This effort encompasses a full end-to-end approach: starting from a large N-body simulation, through setting up LSST-like observations including realistic cadences, through image simulations, and finally processing with Rubin's LSST Science Pipelines. This last step ensures that we generate data products resembling those to be delivered by the Rubin Observatory as closely as is currently possible. The simulated DC2 sky survey covers six optical bands in a wide-fast-deep area of approximately 300 deg², as well as a deep drilling field of approximately 1 deg². We simulate 5 yr of the planned 10 yr survey. The DC2 sky survey has multiple purposes. First, the LSST DESC working groups can use the data set to develop a range of DESC analysis pipelines to prepare for the advent of actual data. Second, it serves as a realistic test bed for the image processing software under development for LSST by the Rubin Observatory. In particular, simulated data provide a controlled way to investigate certain image-level systematic effects. Finally, the DC2 sky survey enables the exploration of new scientific ideas in both static and time domain cosmology.
The irradiation of sub-regions of the parotid has been linked to xerostomia development in patients with head and neck cancer (HNC). In this study, we compared the xerostomia classification performance of radiomics features calculated on clinically relevant and de novo sub-regions of the parotid glands of HNC patients.
All patients (N = 117) were treated with TomoTherapy in 30-35 fractions of 2-2.167 Gy per fraction with daily mega-voltage-CT (MVCT) acquisition for image-guidance purposes. Radiomics features (N = 123) were extracted from daily MVCTs for the whole parotid gland and nine sub-regions. The changes in feature values after each complete week of treatment were considered as predictors of xerostomia (CTCAEv4.03, grade ≥ 2) at 6 and 12 months. Combinations of predictors were generated following the removal of statistically redundant information and stepwise selection. The classification performance of the logistic regression models was evaluated on train and test sets of patients using the Area Under the Curve (AUC) associated with the different sub-regions at each week of treatment and benchmarked with the performance of models solely using dose and toxicity at baseline.
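The two-stage predictor selection sketched above (removing statistically redundant features, then stepwise selection) can be illustrated as follows. The correlation threshold, AUC improvement margin, and synthetic data are all assumptions for demonstration and do not come from the study.

```python
# Sketch of predictor selection: (1) drop features that are statistically
# redundant with an already-kept feature, (2) forward stepwise selection
# driven by cross-validated AUC. Thresholds and data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n, p = 117, 12
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)   # deliberately redundant pair
y = (X[:, 0] + X[:, 3] + rng.normal(size=n) > 0).astype(int)

# Step 1: keep one representative of each highly correlated feature pair.
corr = np.abs(np.corrcoef(X, rowvar=False))
keep = []
for j in range(p):
    if all(corr[j, k] < 0.9 for k in keep):
        keep.append(j)

# Step 2: greedy forward selection while cross-validated AUC improves.
selected, best_auc, improved = [], 0.5, True
while improved:
    improved = False
    for j in keep:
        if j in selected:
            continue
        auc = cross_val_score(LogisticRegression(), X[:, selected + [j]], y,
                              cv=5, scoring="roc_auc").mean()
        if auc > best_auc + 0.01:           # require a minimal improvement
            best_auc, best_j, improved = auc, j, True
    if improved:
        selected.append(best_j)

print("kept after redundancy filter:", keep)
print("stepwise-selected predictors:", selected, f"CV AUC = {best_auc:.2f}")
```

A backward pass (starting from all kept features and removing the least useful one) follows the same pattern with the loop inverted; the study uses both directions.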
In this study, radiomics-based models predicted xerostomia better than standard clinical predictors. Models combining dose to the parotid and xerostomia scores at baseline yielded an AUC_test of 0.63 and 0.61 for xerostomia prediction at 6 and 12 months after radiotherapy, while models based on radiomics features extracted from the whole parotid yielded a maximum AUC_test of 0.67 and 0.75, respectively. Overall, across sub-regions, the maximum AUC_test was 0.76 and 0.80 for xerostomia prediction at 6 and 12 months. Within the first two weeks of treatment, the cranial part of the parotid systematically yielded the highest AUC_test.
Our results indicate that variations of radiomics features calculated on sub-regions of the parotid glands can lead to earlier and improved prediction of xerostomia in HNC patients.
The FireGrid project aims to harness the potential of advanced forms of computation to support the response to large-scale emergencies (with an initial focus on the response to fires in the built environment). Computational models of physical phenomena are developed, and then deployed and computed on High Performance Computing resources to infer incident conditions by assimilating live sensor data from an emergency in real time—or, in the case of predictive models, faster-than-real time. The results of these models are then interpreted by a knowledge-based reasoning scheme to provide decision support information in appropriate terms for the emergency responder. These models are accessed over a Grid from an agent-based system, of which the human responders form an integral part. This paper proposes a novel FireGrid architecture, and describes the rationale behind this architecture and the research results of its application to a large-scale fire experiment.
► Demonstration of infrastructure for urgent emergency response decision support. ► A simulation model infers incident state that is interpreted by knowledge reasoning. ► Dense sensor networks provide live data for steering simulations in real time. ► The integration of Grid and HPC provides requisite computational power. ► AI techniques rationalize and present complex simulation results in a concise manner.
DESC DC2 Data Release Note
LSST Dark Energy Science Collaboration; Abolfathi, Bela; Armstrong, Robert ...
arXiv.org, 06/2022
Paper, Journal Article · Open access
In preparation for cosmological analyses of the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST), the LSST Dark Energy Science Collaboration (LSST DESC) has created a 300 deg\(^2\) simulated survey as part of an effort called Data Challenge 2 (DC2). The DC2 simulated sky survey, in six optical bands with observations following a reference LSST observing cadence, was processed with the LSST Science Pipelines (version 19.0.0). In this Note, we describe the public data release of the resulting object catalogs for the coadded images of five years of simulated observations along with associated truth catalogs. We include a brief description of the major features of the available data sets. To enable convenient access to the data products, we have developed a web portal connected to Globus data services. We describe how to access the data and provide example Jupyter Notebooks in Python to aid first interactions with the data. We welcome feedback and questions about the data release via a GitHub repository.