X-ray crystallography is one of the main methods to determine atomic-resolution 3D images of the whole spectrum of molecules, ranging from small inorganic clusters to large protein complexes consisting of hundreds of thousands of atoms that constitute the macromolecular machinery of life. Life is not static, and unravelling the structure and dynamics of the most important reactions in chemistry and biology is essential to uncover their mechanism. Many of these reactions, including photosynthesis, which drives our biosphere, are light-induced and occur on ultrafast timescales. These have been studied with high time resolution primarily by optical spectroscopy, enabled by ultrafast laser technology, but such methods reduce the vast complexity of the process to a few reaction coordinates. In the AXSIS project at CFEL in Hamburg, funded by the European Research Council, we develop the new method of attosecond serial X-ray crystallography and spectroscopy to give a full description of ultrafast processes, atomically resolved in real space and on the electronic energy landscape, from co-measurement of X-ray and optical spectra and X-ray diffraction. This technique will revolutionize our understanding of structure and function at the atomic and molecular level and thereby unravel fundamental processes in chemistry and biology, such as energy conversion. For that purpose, we develop a compact, fully coherent, THz-driven attosecond X-ray source based on coherent inverse Compton scattering off a free-electron crystal, to outrun radiation damage effects caused by the high X-ray irradiance required to acquire diffraction signals. This highly synergistic project starts from a completely clean slate rather than conforming to the specifications of a large free-electron laser (FEL) user facility, in order to optimize the entire instrumentation towards fundamental measurements of the mechanism of light absorption and excitation energy transfer. A multidisciplinary team of laser, accelerator, and X-ray scientists, as well as spectroscopists and biochemists, optimizes the X-ray pulse parameters in tandem with sample delivery, crystal size, and advanced X-ray detectors. Ultimately, the new capability of attosecond serial X-ray crystallography and spectroscopy will be applied to one of the most important problems in structural biology: elucidating the dynamics of light reactions, electron transfer, and protein structure in photosynthesis.
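As a rough guide to the source concept, the energy of an inverse-Compton-scattered photon in a head-on collision is upshifted by roughly a factor 4γ². The sketch below evaluates this textbook estimate; the electron energy, laser wavelength, and function names are illustrative assumptions, not AXSIS design values.

```python
# Back-of-envelope kinematics for an inverse Compton scattering (ICS) X-ray
# source; illustrative only, not the AXSIS design parameters.
def ics_photon_energy_eV(e_kin_MeV, laser_wavelength_um, a0=0.0, theta=0.0):
    """On-axis scattered photon energy for a head-on ICS geometry:
    E_x ~ 4 * gamma^2 * E_laser / (1 + a0^2/2 + gamma^2 * theta^2)."""
    gamma = 1.0 + e_kin_MeV / 0.511            # electron Lorentz factor
    e_laser_eV = 1.2398 / laser_wavelength_um  # laser photon energy [eV]
    return 4.0 * gamma**2 * e_laser_eV / (1.0 + 0.5 * a0**2 + (gamma * theta)**2)

# Example: ~20 MeV electrons colliding with a 1 um laser give few-keV X-rays.
print(f"{ics_photon_energy_eV(20.0, 1.0) / 1e3:.1f} keV")
```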
The CERN Large Hadron Collider (LHC) is designed to collide proton beams of unprecedented energy, in order to extend the frontiers of high-energy particle physics. During the first very successful running period in 2010–2013, the LHC routinely stored protons at 3.5–4 TeV with a total beam energy of up to 146 MJ, and even higher stored energies are foreseen in the future. This puts extraordinary demands on the control of beam losses: an uncontrolled loss of even a tiny fraction of the beam could cause a superconducting magnet to undergo a transition into a normal-conducting state, or in the worst case cause material damage. Hence a multistage collimation system has been installed to safely intercept high-amplitude beam protons before they are lost elsewhere. To guarantee adequate protection from the collimators, a detailed theoretical understanding is needed. This article presents results of numerical simulations of the distribution of beam losses around the LHC that have leaked out of the collimation system. The studies include tracking of protons through the fields of more than 5000 magnets in the 27 km LHC ring over hundreds of revolutions, and Monte Carlo simulations of particle-matter interactions both in the collimators and in the machine elements hit by escaping particles. The simulation results typically agree within a factor of 2 with measurements of beam loss distributions from the previous LHC run. Considering the complexity of the simulation, which must account for a very large number of unknown imperfections, and in view of the total losses around the ring spanning over seven orders of magnitude, we consider this excellent agreement. Our results give confidence in the simulation tools, which are also used for the design of future accelerators.
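The core of such loss-map studies is multi-turn tracking with an aperture check at the collimators. The toy sketch below shows only the basic bookkeeping (the actual studies used SixTrack coupled to particle-matter codes); the tune, cut, and halo parameters are illustrative assumptions.

```python
# Toy model of multi-turn tracking with a collimator cut: rotate normalized
# phase space once per turn and record when a halo proton exceeds the cut.
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_turns = 10_000, 500
tune = 0.31            # fractional betatron tune (illustrative)
collimator_cut = 6.0   # collimator aperture in units of beam sigma

# Normalized phase-space coordinates (x, x') for an inflated Gaussian halo
coords = rng.normal(size=(n_particles, 2)) * 1.2
c, s = np.cos(2 * np.pi * tune), np.sin(2 * np.pi * tune)
one_turn = np.array([[c, s], [-s, c]])  # linear one-turn map

loss_turn = np.full(n_particles, -1)
alive = np.ones(n_particles, dtype=bool)
for turn in range(n_turns):
    coords[alive] = coords[alive] @ one_turn.T
    hit = alive & (np.abs(coords[:, 0]) > collimator_cut)
    loss_turn[hit] = turn
    alive &= ~hit

print(f"{(~alive).sum()} protons intercepted within {n_turns} turns")
```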
The first run of the Large Hadron Collider (LHC) at CERN was very successful and resulted in important physics discoveries. One way of increasing the luminosity in a collider, which made a very significant contribution to the LHC performance in the first run and can be used even if the beam intensity cannot be increased, is to decrease the transverse beam size at the interaction points by reducing the optical function β*. However, when doing so, the beam becomes larger in the final focusing system, which could expose its aperture to beam losses. For the LHC, which is designed to store beams with a total energy of 362 MJ, this is critical, since the loss of even a small fraction of the beam could cause a magnet quench or even damage. Therefore, the machine aperture has to be protected by the collimation system. The settings of the collimators constrain the maximum beam size that can be tolerated and therefore impose a lower limit on β*. In this paper, we present calculations to determine safe collimator settings and the resulting limit on β*, based on the available aperture and the operational stability of the machine. Our model was used to determine the LHC configurations in 2011 and 2012, and it was found that β* could be decreased significantly compared to the conservative model used in 2010. The gain in luminosity resulting from the decreased margins between collimators was more than a factor of 2, and a further contribution from the use of realistic aperture estimates based on measurements was almost as large. This has played an essential role in the rapid and successful accumulation of experimental data in the LHC.
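To illustrate the logic: the beam size at the triplet grows as β* shrinks (roughly β_max ∝ L*²/β*), so a required protection margin in units of beam σ translates into a lower limit on β*. The sketch below makes that inversion explicit; all numbers are placeholders, not actual LHC settings.

```python
# Illustrative estimate of the beta* limit imposed by the triplet aperture,
# using the scaling beta_max ~ L*^2 / beta*. Placeholder numbers throughout.
def min_beta_star(aperture_m, l_star_m, eps_norm_m, gamma_rel, n_sigma_protected):
    """Smallest beta* [m] such that the aperture still covers
    n_sigma_protected beam sigmas at the triplet."""
    eps_geo = eps_norm_m / gamma_rel  # geometric emittance
    # Require: aperture >= n_sigma * sqrt(eps_geo * L*^2 / beta*)
    return eps_geo * (n_sigma_protected * l_star_m / aperture_m) ** 2

beta_star = min_beta_star(
    aperture_m=0.012,        # available half-aperture (placeholder)
    l_star_m=23.0,           # distance from the IP to the first quadrupole
    eps_norm_m=3.75e-6,      # nominal normalized emittance
    gamma_rel=4263.0,        # 4 TeV protons
    n_sigma_protected=12.0,  # margin set by the collimation hierarchy
)
print(f"minimum beta* ~ {beta_star:.2f} m")
```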
The extreme electromagnetic fields sustained by plasma-based accelerators could drastically reduce the size and cost of future accelerator facilities. However, they are also an inherent source of correlated energy spread in the produced beams, which severely limits the usability of these devices. We propose here to split the acceleration process into two plasma stages joined by a magnetic chicane in which the energy correlation induced in the first stage is inverted such that it can be naturally compensated in the second. Simulations of a particular 1.5-m-long setup show that 5.5 GeV beams with relative energy spreads of 1.2×10^{-3} (total) and 2.8×10^{-4} (slice) could be achieved while preserving a submicron emittance. This is at least one order of magnitude below the current state of the art and would enable applications such as compact free-electron lasers.
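A toy one-dimensional model conveys the idea: a linear wakefield slope imprints a z-correlated chirp in the first stage, the chicane's z → z + R56·δ transport inverts the head-tail ordering, and the same slope in the second stage then cancels the chirp. All parameters below are illustrative assumptions, not the simulated 1.5-m setup.

```python
# Toy 1D longitudinal phase-space model of two-stage chirp compensation.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
z = rng.normal(0.0, 1.0e-6, n)              # position within the bunch [m]
energy = 1.0e9 + rng.normal(0.0, 1.0e6, n)  # ~1 GeV, uncorrelated spread [eV]

wake_slope = 2.0e14  # wakefield slope along the bunch [eV/m] (illustrative)
stage_gain = 2.25e9  # mean energy gain per plasma stage [eV]

# Stage 1: uniform gain plus a z-correlated chirp from the wakefield slope
energy += stage_gain - wake_slope * z

# Chicane: z -> z + R56*delta, with R56 chosen to invert head-tail ordering
delta = energy / energy.mean() - 1.0
slope = np.polyfit(z, delta, 1)[0]  # measured chirp d(delta)/dz
r56 = -2.0 / slope                  # maps z -> -z for the chirped core
z = z + r56 * delta

# Stage 2: the same wakefield slope now cancels the inverted chirp
energy += stage_gain - wake_slope * z

print(f"final relative energy spread: {energy.std() / energy.mean():.1e}")
```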
Plasma wakefield accelerators are capable of sustaining gigavolt-per-centimeter accelerating fields, surpassing the electric breakdown threshold in state-of-the-art accelerator modules by 3–4 orders of magnitude. Beam-driven wakefields offer particularly attractive conditions for the generation and acceleration of high-quality beams. However, this scheme relies on kilometer-scale accelerators. Here, we report on the demonstration of a millimeter-scale plasma accelerator powered by laser-accelerated electron beams. We showcase the acceleration of electron beams to 128 MeV, consistent with simulations exhibiting accelerating gradients exceeding 100 GV m⁻¹. This miniaturized accelerator is further explored by employing a controlled pair of drive and witness electron bunches, where a fraction of the driver energy is transferred to the accelerated witness through the plasma. Such a hybrid approach allows fundamental studies of beam-driven plasma accelerator concepts at widely accessible high-power laser facilities. It is anticipated to provide compact sources of energetic high-brightness electron beams for quality-demanding applications such as free-electron lasers.
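The scale of such gradients can be checked against the cold, nonrelativistic wave-breaking field E0 = m_e c ω_p / e, a standard yardstick for plasma accelerators. The sketch below (the density value is chosen purely for illustration) shows that densities around 10^19 cm^-3 indeed support fields well above 100 GV m⁻¹.

```python
# Cold, nonrelativistic wave-breaking field E0 = m_e * c * omega_p / e,
# a standard scale for plasma wakefield gradients. Illustrative numbers.
import math

def wave_breaking_field_GV_per_m(n_e_per_cm3):
    e = 1.602e-19      # elementary charge [C]
    m_e = 9.109e-31    # electron mass [kg]
    c = 2.998e8        # speed of light [m/s]
    eps0 = 8.854e-12   # vacuum permittivity [F/m]
    n_e = n_e_per_cm3 * 1e6                         # convert to m^-3
    omega_p = math.sqrt(n_e * e**2 / (eps0 * m_e))  # plasma frequency [rad/s]
    return m_e * c * omega_p / e / 1e9

# At n_e ~ 1e19 cm^-3, E0 is ~300 GV/m, so >100 GV/m gradients are plausible.
print(f"{wave_breaking_field_GV_per_m(1e19):.0f} GV/m")
```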
The generation of ultrashort electron bunches with ultrasmall bunch arrival-time jitter is of vital importance for laser-plasma wakefield acceleration with external injection. We study the production of 100 MeV electron bunches with sub-femtosecond (fs) bunch durations and bunch arrival-time jitters of less than 10 fs in an S-band photoinjector, using a weak magnetic chicane with a slit collimator. The beam dynamics inside the chicane is simulated with two codes employing different self-force models. The first code separates the self-force into a three-dimensional (3D) quasistatic space-charge model and a one-dimensional (1D) coherent synchrotron radiation (CSR) model, while the other starts from first principles with a so-called 3D sub-bunch method. The simulations indicate that the CSR effect dominates the horizontal emittance growth and that the 1D CSR model underestimates the final bunch duration and emittance because of the very large transverse-to-longitudinal aspect ratio of the sub-fs bunch. In particular, the CSR effect is also strongly affected by the vertical bunch size. Due to the coupling between the horizontal and longitudinal phase spaces, the bunch duration at the entrance of the last dipole magnet of the chicane is still significantly longer than that at the exit of the chicane, which considerably mitigates the impact of space-charge and CSR effects on the beam quality. Exploiting this effect, a bunch charge of up to 4.8 pC in a sub-fs bunch could be demonstrated in simulation. In addition, we analytically and numerically investigate the impact of different jitter sources on the bunch arrival-time jitter downstream of the chicane, and define tolerance budgets assuming realistic values of the linac stability for different bunch charges and compression schemes.
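For orientation, the linear, CSR-free relations behind such a chicane compressor are σ_z,f = |1 + hR56|·σ_z,i for the bunch length and Δt = (R56/c)·σ_δ for the arrival-time jitter induced by energy jitter. The sketch below evaluates them; the R56, chirp, and jitter values are illustrative assumptions, not the parameters of the studied photoinjector.

```python
# Linear, CSR-free estimates for chicane compression and arrival-time
# jitter; a toy complement to the full 3D simulations. Values illustrative.
C = 2.998e8  # speed of light [m/s]

def final_bunch_duration_fs(sigma_z0_m, chirp_h_per_m, r56_m):
    """sigma_t,f = |1 + h*R56| * sigma_z,i / c (uncorrelated spread neglected)."""
    return abs(1.0 + chirp_h_per_m * r56_m) * sigma_z0_m / C * 1e15

def arrival_jitter_fs(r56_m, sigma_delta):
    """Timing jitter from shot-to-shot relative energy jitter: (R56/c)*sigma_delta."""
    return abs(r56_m) / C * sigma_delta * 1e15

r56 = -0.01  # weak chicane, R56 = -10 mm
h = 99.9     # energy chirp [1/m], close to full compression (h = -1/R56)
print(f"bunch duration: {final_bunch_duration_fs(100e-6, h, r56):.2f} fs")
print(f"arrival-time jitter: {arrival_jitter_fs(r56, 1e-4):.1f} fs")
```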
One source of experimental background in the CERN Large Hadron Collider (LHC) is particles entering the detectors from the machine. These particles are created in cascades caused by upstream interactions of beam protons with residual gas molecules or collimators. We estimate the losses on the collimators with SixTrack and simulate the showers with FLUKA and MARS to obtain the flux and distribution of particles entering the ATLAS and CMS detectors. We consider some machine configurations used in the first LHC run, with a focus on 3.5 TeV operation as in 2011. Results from FLUKA and MARS are compared and a very good agreement is found. An analysis of logged LHC data provides, for different processes, absolute beam loss rates, which are used together with further simulations of vacuum conditions to normalize the results to rates of particles entering the detectors. We assess the relative importance of background from elastic and inelastic beam–gas interactions and of the leakage out of the LHC collimation system, and show that beam–gas interactions are the dominant source of machine-induced background for the studied machine scenarios. Our results serve as a starting point for the experiments to perform further simulations in order to estimate the resulting signals in the detectors.
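The normalization step rests on a simple rate estimate: per turn, each proton crossing a section of length L interacts with probability n_gas·σ·L. The sketch below implements this textbook relation; the intensity, gas density, cross section, and section length are placeholders, not the logged 2011 machine conditions.

```python
# Rough normalization of beam-gas interaction rates. All parameter values
# below are illustrative placeholders.
F_REV = 11245.0  # LHC revolution frequency [Hz]

def beam_gas_rate(n_protons, gas_density_per_m3, sigma_inel_m2, length_m):
    """Inelastic beam-gas interactions per second in a section of given
    length: each stored proton crosses the section once per turn."""
    return n_protons * F_REV * gas_density_per_m3 * sigma_inel_m2 * length_m

rate = beam_gas_rate(
    n_protons=1.5e14,         # total protons in one beam (placeholder)
    gas_density_per_m3=1e13,  # residual-gas density (H2-equivalent)
    sigma_inel_m2=4e-30,      # ~40 mb inelastic cross section (assumption)
    length_m=50.0,            # length of the region upstream of the detector
)
print(f"~{rate:.2e} inelastic beam-gas events/s")
```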
• We simulate sources of machine-induced experimental background at the CERN LHC.
• We focus on the ATLAS and CMS experiments.
• The LHC machine conditions are analyzed to normalize the simulation results.
• Beam–gas interactions are found to be the dominant source of particles entering the detectors.
The beam aperture of a particle accelerator defines the clearance available for the circulating beams and is a parameter of paramount importance for the accelerator performance. At the CERN Large Hadron Collider (LHC), the knowledge and control of the available aperture is crucial because the nominal proton beams carry an energy of 362 MJ stored in a superconducting environment. Even a tiny fraction of beam losses could quench the superconducting magnets or cause severe material damage. Furthermore, in a circular collider, the performance in terms of peak luminosity depends to a large extent on the aperture of the inner triplet quadrupoles, which are used to focus the beams at the interaction points. In the LHC, this represents the smallest aperture at top energy with squeezed beams and determines the maximum potential reach of the peak luminosity. Beam-based aperture measurements in these conditions are challenging. In this paper, we present different methods that have been developed over the years for precise beam-based aperture measurements in the LHC, highlighting applications and results that contributed to boosting the operational LHC performance in Run 1 (2010–2013) and Run 2 (2015–2018).
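The common currency of all these methods is the available aperture expressed in units of the local beam size, σ = sqrt(β·ε_N/γ_rel). The sketch below performs this conversion; the optics, aperture, and orbit numbers are placeholder assumptions, not measured LHC data.

```python
# Converting a mechanical aperture to beam-sigma units, the common currency
# of beam-based aperture measurements. Illustrative values throughout.
import math

def aperture_in_sigma(half_aperture_m, beta_m, eps_norm_m, gamma_rel,
                      closed_orbit_m=0.0):
    """Available aperture in units of the local betatron beam size,
    sigma = sqrt(beta * eps_norm / gamma_rel)."""
    sigma = math.sqrt(beta_m * eps_norm_m / gamma_rel)
    return (half_aperture_m - abs(closed_orbit_m)) / sigma

n_sigma = aperture_in_sigma(
    half_aperture_m=0.017,  # beam-screen half gap (placeholder)
    beta_m=4000.0,          # peak beta in the triplet with squeezed optics
    eps_norm_m=3.75e-6,     # nominal normalized emittance
    gamma_rel=6928.0,       # 6.5 TeV protons
    closed_orbit_m=0.002,   # closed-orbit allowance
)
print(f"aperture ~ {n_sigma:.1f} sigma")
```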