We explore the possible connection between the open cluster IC 2391 and the unbound Argus association identified by the search for associations containing young stars (SACY) survey. In addition to the common kinematics and ages of these two systems, here we explore their chemical abundance patterns to test whether the two substructures shared a common origin. We carry out a homogeneous high-resolution elemental abundance study of eight confirmed members of IC 2391 as well as six members of the Argus association using UVES spectra. We derive spectroscopic stellar parameters and abundances for Fe, Na, Mg, Al, Si, Ca, Ti, Cr, Ni and Ba.
All stars in the open cluster and the Argus association were found to share similar abundances, with the scatter well within the uncertainties: [Fe/H] = −0.04 ± 0.03 for cluster stars and [Fe/H] = −0.06 ± 0.05 for Argus stars. Effects of overionization/excitation were seen for stars cooler than roughly 5200 K, as previously noted in the literature. Enhanced Ba abundances of around 0.6 dex were also observed in both systems. The common ages, kinematics and chemical abundances strongly support the conclusion that the Argus association stars originated from the open cluster IC 2391. Simple modelling of this system finds the dissolution to be consistent with two-body interactions.
We report results from a high-resolution optical spectroscopic survey aimed at searching for nearby young associations and young stars among the optical counterparts of ROSAT All-Sky Survey X-ray sources in the Southern Hemisphere. We selected 1953 late-type (B-V ≳ 0.6), potentially young optical counterparts out of a total of 9574 1RXS sources for follow-up observations. At least one high-resolution spectrum was obtained for each of 1511 targets. This paper is the first in a series presenting the results of the SACY survey. Here we describe our sample and our observations, and we describe a convergence method in (U, V, W) velocity space to find associations. As an example, we discuss the validity of this method in the framework of the β Pic Association.
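The abstract above does not spell out the convergence method itself, but the underlying idea of grouping stars that cluster together in (U, V, W) Galactic velocity space can be sketched with a simple friends-of-friends grouping. The linking length and all velocity values below are illustrative assumptions, not the SACY survey's actual parameters.

```python
# Minimal sketch: group stars into kinematic associations by proximity
# in (U, V, W) velocity space via friends-of-friends with union-find.
# The real SACY convergence method is more elaborate.

def uvw_distance(a, b):
    """Euclidean distance between two (U, V, W) velocity vectors in km/s."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def find_associations(stars, linking_length=3.0):
    """Stars closer than `linking_length` (km/s, an assumed cut) in
    velocity space join the same group; returns groups of star indices,
    largest first."""
    parent = list(range(len(stars)))

    def root(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(len(stars)):
        for j in range(i + 1, len(stars)):
            if uvw_distance(stars[i], stars[j]) < linking_length:
                parent[root(i)] = root(j)

    groups = {}
    for i in range(len(stars)):
        groups.setdefault(root(i), []).append(i)
    return sorted(groups.values(), key=len, reverse=True)

# Example: two tight kinematic groups plus one field interloper
stars = [(-10.9, -16.0, -9.0), (-10.5, -15.8, -8.9),   # group 1
         (-22.0, -14.4, -5.0), (-21.7, -14.1, -5.2),   # group 2
         (40.0, 10.0, 0.0)]                            # outlier
print(find_associations(stars))
```

A real implementation would also propagate measurement uncertainties on the space velocities before applying any such grouping.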
Context. The young associations offer us one of the best opportunities to study the properties of young stellar and substellar objects and to directly image planets, thanks to their proximity (<200 pc) and age (≈5-150 Myr). However, many previous works have been limited to identifying the brighter, more active members (≈1 M☉), owing to photometric survey sensitivities limiting the detection of lower-mass objects. Aims. We search the fields of view of 542 previously identified members of the young associations to identify wide or extremely wide (1000-100000 au in physical separation) companions. Methods. We combined 2MASS near-infrared photometry (J, H, K) with proper-motion values (from UCAC4, PPMXL, NOMAD) to identify companions in the fields of view of known members. We collated further photometry and spectroscopy from the literature and conducted our own high-resolution spectroscopic observations for a subsample of candidate members. This complementary information allowed us to assess the efficiency of our method. Results. We identified 84 targets (45 with 0.2-1.3 M☉, 17 with 0.08-0.2 M☉, 22 with <0.08 M☉) in our analysis, ten of which had been identified from spectroscopic analysis in previous young association works. For 33 of these 84, we were able to further assess their membership using a variety of properties (X-ray emission, UV excess, Hα, lithium and K I equivalent widths, radial velocities, and CaH indices). We derive a success rate of 76-88% for this technique based on the consistency of these properties. Conclusions. Once confirmed, the targets identified in this work would significantly improve our knowledge of the lower-mass end of the young associations. Additionally, these targets would make an ideal new sample for the identification and study of planets around nearby young stars.
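The selection described in the Methods section amounts to a common-proper-motion and projected-separation cut. A minimal sketch of such a test follows; the tolerance, the small-angle approximation, and all numerical values are illustrative assumptions, not the paper's actual criteria.

```python
# Hypothetical companion test: flag a field star as a wide-companion
# candidate if its projected separation from a known member falls in
# 1000-100000 au and its proper motion agrees with the member's within
# a fractional tolerance (assumed value below).

def projected_separation_au(sep_arcsec, distance_pc):
    """Small-angle projected separation: 1 arcsec at 1 pc = 1 au."""
    return sep_arcsec * distance_pc

def is_wide_companion(sep_arcsec, distance_pc,
                      pm_member, pm_field, pm_tol_frac=0.2):
    """pm_* are (pm_RA, pm_Dec) in mas/yr; the motions must agree to
    within pm_tol_frac of the member's total proper motion."""
    sep_au = projected_separation_au(sep_arcsec, distance_pc)
    if not 1000.0 <= sep_au <= 100000.0:
        return False
    dpm = ((pm_member[0] - pm_field[0]) ** 2 +
           (pm_member[1] - pm_field[1]) ** 2) ** 0.5
    pm_total = (pm_member[0] ** 2 + pm_member[1] ** 2) ** 0.5
    return dpm <= pm_tol_frac * pm_total

# A member at 50 pc with a candidate 100 arcsec away (5000 au projected)
print(is_wide_companion(100.0, 50.0, (80.0, -60.0), (78.0, -58.0)))
```

In practice the photometry (J, H, K colour-magnitude cuts) would be applied alongside the kinematic test to reject unrelated field stars with coincidentally similar motions.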
Given the predicted substellar mass of the majority of these new candidate members and their proximity, high-contrast imaging techniques would facilitate the search for new low-mass planets.
Young loose nearby associations are unique samples of close, young pre-main-sequence (PMS) stars. A significant number of members of these associations have been identified by the SACY (search for associations containing young stars) collaboration. We can use the proximity and youth of these members to investigate key ingredients of star formation processes, such as multiplicity. With the final goal of better understanding the multiplicity properties of PMS stars at different evolutionary stages, we present the statistics of identified multiple systems among 113 confirmed SACY members. We obtained adaptive-optics-assisted near-infrared observations with the Nasmyth Adaptive Optics System and Near-Infrared Imager and Spectrograph (NACO), ESO/VLT, and the Infrared Camera for Adaptive optics at Lick observatory (IRCAL), Lick Observatory, for at least one epoch of all 113 SACY members. Analysis from previous work using tight binaries indicated that the underlying multiple-system distributions of the SACY dataset and of the young star-forming region Taurus are statistically similar, supporting the idea that these two populations formed in a similar way.
The German CMS community (DCMS) as a whole can benefit from the various compute resources available to its different institutes. While Grid-enabled and National Analysis Facility resources are usually shared within the community, local and recently enabled opportunistic resources such as HPC centers and cloud resources are not. Furthermore, there is no shared submission infrastructure available. Via HTCondor's mechanisms to connect resource pools, several remote pools can be connected transparently to the users and therefore used more efficiently by a multitude of user groups. In addition to the statically provisioned resources, dynamically allocated resources from external cloud providers as well as HPC centers can also be integrated. However, the usage of such dynamically allocated resources gives rise to additional complexity: constraints on the access policies of the resources, as well as workflow necessities, have to be taken care of. To maintain a well-defined and reliable runtime environment on each resource, virtualization and containerization technologies such as virtual machines, Docker, and Singularity are used.
Data-intensive end-user analyses in high energy physics require high data throughput to reach short turnaround cycles. This leads to enormous challenges for storage and network infrastructure, especially when facing the tremendously increasing amount of data to be processed during High-Luminosity LHC runs. Including opportunistic resources with volatile storage systems into the traditional HEP computing facilities makes this situation more complex. Bringing data close to the computing units is a promising approach to solving throughput limitations and improving the overall performance. We focus on coordinated distributed caching, steering workflows to the most suitable hosts in terms of cached files. This allows optimizing the overall processing efficiency of data-intensive workflows and using the limited cache volume efficiently by reducing the replication of data across distributed caches. We developed the NaviX coordination service at KIT, which realizes coordinated distributed caching using an XRootD cache proxy server infrastructure and the HTCondor batch system. In this paper, we present the experience gained in operating coordinated distributed caches on cloud and HPC resources. Furthermore, we show benchmarks of a dedicated high-throughput cluster, the Throughput-Optimized Analysis System (TOpAS), which is based on the above-mentioned concept.
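The core scheduling idea, routing each workflow to the host whose cache already holds the largest share of its input files, can be sketched in a few lines. The host and file names below are made up for illustration; NaviX's actual matchmaking logic inside HTCondor is not described in this abstract.

```python
# Hypothetical sketch of cache-aware workflow coordination: score each
# host by the fraction of the workflow's input files already cached
# there, and route the workflow to the best-scoring host. Replication
# across caches is reduced because inputs tend to stay on one host.

def best_host(workflow_inputs, host_caches):
    """workflow_inputs: set of file names; host_caches: dict mapping
    host name -> set of cached file names. Returns (host, fraction)."""
    def score(host):
        return len(workflow_inputs & host_caches[host]) / len(workflow_inputs)
    host = max(host_caches, key=score)
    return host, score(host)

caches = {
    "worker-a": {"f1", "f2", "f3"},
    "worker-b": {"f3", "f4"},
}
print(best_host({"f1", "f2", "f4"}, caches))  # worker-a caches 2 of 3 inputs
```

A production service would additionally weigh host load and cache eviction, and would fall back to remote reads when no host has a useful overlap.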
The rapidly increasing amount of data produced by current experiments in high energy particle physics challenges both end users and providers of computing resources. The boosted data rates and the complexity of analyses require huge datasets to be processed in short turnaround cycles. Usually, data storage and computing farms are deployed by different providers, which leads to data delocalization and a strong influence of the interconnection transfer rates. The CMS collaboration at KIT has developed a prototype enabling data locality for HEP analysis processing via two concepts. A coordinated and distributed caching approach, which reduces the limiting factor of data transfers by joining local high-performance devices with large background storage, was tested. A throughput optimization was thereby reached by selecting and allocating critical data within user workflows. A highly performant setup using these caching solutions enables fast processing of throughput-dependent analysis workflows.
The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) is a general-purpose particle detector and comprises the largest silicon-based tracking system built to date, with 75 million individual readout channels. The precise reconstruction of particle tracks from this tremendous number of input channels is a compute-intensive task. The foreseen LHC beam parameters for the next data-taking period, starting in 2015, will result in an increase in the number of simultaneous proton-proton interactions and hence in the number of particle tracks per event. Due to the stagnating clock frequencies of individual CPU cores, new approaches to particle track reconstruction need to be evaluated in order to cope with this computational challenge. Track finding methods based on cellular automata (CA) offer a fast and parallelizable alternative to the well-established Kalman filter-based algorithms. We present a new cellular-automaton-based track reconstruction that copes with the complex detector geometry of CMS. We detail the specific design choices made to allow for high-performance computation on GPU and CPU devices using the OpenCL framework. We conclude by evaluating the physics performance as well as the computational properties of our implementation on various hardware platforms, and show that a significant speedup can be attained by using GPU architectures while achieving reasonable physics performance at the same time.
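The cellular-automaton track-finding idea mentioned above can be illustrated with a toy model: cells are segments joining hits on adjacent detector layers, synchronous sweeps grow each cell's state to the length of the compatible chain behind it, and the longest chain is then read out as a track. The geometry cut below (|dy| <= 1) is a stand-in assumption for the real CMS compatibility criteria, which the abstract does not spell out.

```python
# Toy cellular-automaton track finder on a layered detector.

def build_cells(hits):
    """Hits are (layer, y) tuples; a cell joins hits on adjacent layers
    that pass a crude straight-line cut (assumed geometry)."""
    return [(a, b) for a in hits for b in hits
            if b[0] == a[0] + 1 and abs(b[1] - a[1]) <= 1.0]

def evolve(cells):
    """Synchronous CA sweeps: a cell's state grows while some inner
    neighbour (a cell ending on its inner hit) has an equal state, so
    final states count the chain length behind each cell."""
    state = {c: 1 for c in cells}
    changed = True
    while changed:
        changed = False
        new = dict(state)
        for c in cells:
            if any(state[n] == state[c] for n in cells if n[1] == c[0]):
                new[c] = state[c] + 1
                changed = True
        state = new
    return state

def longest_track(cells, state):
    """Walk back from the highest-state cell along decreasing states
    and return the ordered list of hits on the track."""
    c = max(cells, key=lambda x: state[x])
    track = [c]
    while state[c] > 1:
        c = next(n for n in cells
                 if n[1] == c[0] and state[n] == state[c] - 1)
        track.append(c)
    track.reverse()
    return [track[0][0]] + [seg[1] for seg in track]

# Four collinear hits plus one noise hit on layer 1
hits = [(0, 0.0), (1, 0.5), (2, 1.0), (3, 1.5), (1, 5.0)]
cells = build_cells(hits)
state = evolve(cells)
print(longest_track(cells, state))
```

The independence of the per-cell state updates within each sweep is what makes this scheme map naturally onto GPU threads, which is the parallelism the abstract's OpenCL implementation exploits.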