Cryo-electron tomography and subtomogram averaging (STA) have developed rapidly in recent years. They provide structures of macromolecular complexes in situ, in their cellular context, at or below subnanometer resolution, and have led to unprecedented insights into the inner workings of molecular machines in their native environment, as well as their functionally relevant conformations and spatial distribution within biological cells or tissues. Given the tremendous potential of cryo-electron tomography STA in in situ structural cell biology, we previously developed emClarity, a graphics processing unit-accelerated image-processing software package that offers STA and classification of macromolecular complexes at high resolution. However, the workflow remains challenging, especially for newcomers to the field. In this protocol, we describe a detailed workflow, the processing and parameters associated with each step, from the initial tomography tilt-series data to the final 3D density map, with several features unique to emClarity. We use four different samples, including human immunodeficiency virus type 1 Gag assemblies, ribosomes and apoferritin, to illustrate the procedure and results of STA and classification. Following the processing steps described in this protocol, along with a comprehensive tutorial and guidelines for troubleshooting and parameter optimization, one can obtain density maps at up to 2.8 Å resolution from six tilt series by cryo-electron tomography STA.
Bacterial chemotaxis is a ubiquitous behavior that enables cell movement toward or away from specific chemicals. It serves as an important model for understanding cell sensory signal transduction and motility. Characterization of the molecular mechanisms underlying chemotaxis is of fundamental interest and requires a high-resolution structural picture of the sensing machinery, the chemosensory array. In this study, we combine cryo-electron tomography and molecular simulation to present the complete structure of the core signaling unit, the basic building block of chemosensory arrays. Our results provide new insight into previously poorly resolved regions of the complex and offer a structural basis for designing new experiments to test mechanistic hypotheses.
Gag is the HIV structural precursor protein, which is cleaved by the viral protease to produce mature infectious virions. Gag is a polyprotein composed of the MA (matrix), CA (capsid), SP1, NC (nucleocapsid), SP2 and p6 domains. SP1, together with the last eight residues of CA, has been hypothesized to form a six-helix bundle responsible for the higher-order multimerization of Gag necessary for HIV particle assembly. However, the structure of the complete six-helix bundle has been elusive. Here, we determined the structures of both Gag in vitro assemblies and Gag viral-like particles (VLPs) to 4.2 Å and 4.5 Å resolution, respectively, using cryo-electron tomography and subtomogram averaging with emClarity. A single amino acid mutation (T8I) in SP1 stabilizes the six-helix bundle, allowing us to discern the entire CA-SP1 helix connecting to the NC domain. These structures provide a blueprint for the future development of small-molecule inhibitors that can lock SP1 in a stable helical conformation, interfere with virus maturation, and thus block HIV-1 infection.
Bayesian methods are well suited to so-called data re-assimilation. Bayesian inference applied to core physics allows a new adjustment of nuclear data using the results of integral experiments. The theory leading to re-assimilation encompasses a broader approach. In previous papers, new methods were developed to calculate the impact of nuclear and manufacturing data uncertainties on neutronics parameters. Usually, adjustment is performed step by step, with one parameter and one experiment per batch. In this document, we rewrite Orlov's theory to extend it to the adjustment of multiple experimental values and parameters. We find that the multidimensional system can be written in the same form as the one-dimensional system, in matrix notation. In this extension, correlation terms appear between experimental processes (manufacturing and measurements), and we discuss how to fix them. The formulas are then applied to the coupled Boltzmann/Bateman problem, where each term can be evaluated by computing depletion uncertainties, as studied in previous papers.
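The matrix form described above can be sketched with the standard generalized least-squares (Bayesian) adjustment used in nuclear data re-assimilation; this is a minimal illustration, not the authors' implementation, and all variable names are assumptions:

```python
import numpy as np

def gls_adjust(x, Cx, S, E, CE, VE):
    """Generalized least-squares (Bayesian) adjustment in matrix form.

    x  : prior parameter vector (nuclear/manufacturing data)
    Cx : prior parameter covariance matrix
    S  : sensitivity matrix of computed integral values to the parameters
    E  : measured integral values (multiple experiments at once)
    CE : computed integral values with the prior parameters
    VE : experimental covariance matrix, which may carry correlation
         terms between measurement and manufacturing processes
    """
    R = VE + S @ Cx @ S.T             # total covariance of the residual
    K = Cx @ S.T @ np.linalg.inv(R)   # gain matrix
    x_post = x + K @ (E - CE)         # adjusted parameters
    Cx_post = Cx - K @ S @ Cx         # reduced posterior covariance
    return x_post, Cx_post
```

With a single parameter and a single experiment, this reduces to the familiar scalar update, which is the sense in which the multidimensional system keeps the one-dimensional form.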
A nuclear data-based uncertainty propagation methodology is extended to enable the propagation of manufacturing/technological data (TD) uncertainties in a burn-up calculation problem, taking into account correlation terms between the Boltzmann and Bateman terms. The methodology is applied to reactivity and power distributions in a Material Testing Reactor benchmark. Owing to the inherently statistical behavior of manufacturing tolerances, a Monte Carlo sampling method is used to determine output perturbations on integral quantities. A global sensitivity analysis (GSA) is performed for each manufacturing parameter, allowing the influential parameters whose tolerances need to be better controlled to be identified and ranked. We show that the overall impact of some TD uncertainties, such as uranium enrichment or fuel plate thickness, on the reactivity is negligible because the different core areas induce compensating effects on this global quantity. However, local quantities, such as power distributions, are strongly affected by propagated TD uncertainties. For isotopic concentrations, no clear trends appear in the results.
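The Monte Carlo sampling step can be illustrated as follows; the parameter names, tolerances, and the placeholder response model are invented for demonstration and stand in for the actual neutronics calculation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative manufacturing parameters: (nominal value, tolerance taken
# as a uniform half-width). Names and numbers are assumptions.
params = {
    "u235_enrichment": (0.199, 0.002),   # fraction
    "plate_thickness": (1.27, 0.02),     # mm
}

def toy_reactivity(enrich, thick):
    # Placeholder linear model standing in for the neutronics code.
    return 1000.0 * enrich - 50.0 * (thick - 1.27)

n = 10_000
samples = {k: rng.uniform(v - t, v + t, n) for k, (v, t) in params.items()}
rho = toy_reactivity(samples["u235_enrichment"], samples["plate_thickness"])

print(f"mean reactivity: {rho.mean():.2f}, std: {rho.std():.2f}")
```

Repeating the sampling with one parameter fixed at a time gives the per-parameter output spread used to rank influential tolerances in a global sensitivity analysis.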
The IAEA's (International Atomic Energy Agency) publication SSG-26 defines a methodology for calculating A1/A2 values. These values were conceived as limits for the transport of radioactive goods, to limit the public's exposure to radiation in the event of an accident. The limits ensure that people involved in an accident receive an effective dose of no more than 50 mSv and a skin equivalent dose of no more than 500 mSv. The current values are based on five exposure scenarios taken from the Q-System, described in 1996. In 2013, the IAEA commissioned an international working group to improve the Q-System and calculate new limits for the transport of radioactive material. Within this working group, CERN has developed a set of models and an associated mathematical framework, and compiled them into a single piece of software. The primary purpose of the software is to compute and compare the values produced by the different models under discussion. Later, the software could be distributed in a lighter version that includes the agreed-upon regulatory model for determining the A1/A2 values.
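The core of the Q-System logic is that each exposure scenario yields a candidate activity limit (dose limit divided by dose per unit activity), and the A value is the most restrictive of them. A minimal sketch, where the scenario coefficients are purely illustrative placeholders rather than values for any real nuclide:

```python
# Dose limits from the regulation (50 mSv effective, 500 mSv skin).
effective_dose_limit = 0.05   # Sv
skin_dose_limit = 0.5         # Sv

# Hypothetical dose-per-unit-activity coefficients (Sv/TBq) for one
# nuclide under the five Q-System exposure scenarios; values invented.
scenarios = {
    "QA_external_photon":    (0.025,  effective_dose_limit),
    "QB_external_beta":      (0.10,   skin_dose_limit),
    "QC_inhalation":         (0.0125, effective_dose_limit),
    "QD_skin_contamination": (0.25,   skin_dose_limit),
    "QE_submersion":         (0.005,  effective_dose_limit),
}

# Candidate limit per scenario; the A value is the minimum over all five.
q_values = {name: limit / coeff for name, (coeff, limit) in scenarios.items()}
A = min(q_values.values())
print(f"A value: {A:.2f} TBq")
```

Comparing models, as the software does, then amounts to recomputing these candidate limits with each model's dose coefficients and examining how the minimum shifts.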
The precise estimation of Pearson correlations, also called "representativity" coefficients, between core configurations is fundamental for properly assessing the propagation of nuclear data (ND) uncertainties on integral parameters such as k-eff, power distributions, or reactivity coefficients. In this paper, a traditional adjoint method is used to propagate ND uncertainty on reactivity and reactivity coefficients and to estimate correlations between different states of the core. We show that neglecting these correlations induces a loss of information in the final uncertainty. We also show that using approximate Pearson values does not lead to a significant error in the model. This calculation is performed for reactivity at the beginning of life and can be extended to other parameters during depletion calculations.
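The representativity coefficient between two configurations is conventionally computed from their sensitivity vectors and the ND covariance matrix; a minimal sketch under that standard definition (not the paper's code):

```python
import numpy as np

def representativity(S1, S2, M):
    """Pearson ("representativity") coefficient between two core
    configurations, from their sensitivity vectors S1, S2 with respect
    to the nuclear data and the ND covariance matrix M:
        r = (S1' M S2) / sqrt((S1' M S1) (S2' M S2))
    """
    num = S1 @ M @ S2
    den = np.sqrt((S1 @ M @ S1) * (S2 @ M @ S2))
    return num / den
```

Identical sensitivity profiles give r = 1 (the two states share all their ND-induced uncertainty), while orthogonal profiles give r = 0, which is the case where neglecting the correlation term costs the most information.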
Dose equivalent limits for single organs are recommended by the ICRP (International Commission on Radiological Protection, Publication 103). These limits do not lend themselves to direct measurement; they are assessed by convolving conversion coefficients with particle fluences. The fluence-to-dose conversion coefficients are tabulated in the ICRP literature and allow the organ dose of interest to be assessed using numerical simulations. In particular, the literature lacks local skin equivalent dose (LSD) coefficients for neutrons. In this article, we compute such values for neutron energies ranging from 1 meV to 15 MeV. We use the FLUKA, MCNP and GEANT4 Monte Carlo radiation transport codes to perform the calculations, and we compare the three codes. These calculated values are important for radiation protection studies and radiotherapy applications.
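The convolution mentioned above is, in discrete form, an energy-binned sum of the fluence spectrum against the tabulated coefficients. A sketch with entirely made-up numbers (the real coefficients are what the article computes):

```python
import numpy as np

# Illustrative fluence-to-dose convolution. The conversion coefficients
# below are placeholders, NOT tabulated LSD values.
energies = np.array([1e-9, 1e-6, 1e-3, 1.0, 15.0])    # MeV bin centers
coeffs   = np.array([5.0, 8.0, 40.0, 350.0, 500.0])   # pSv cm^2 (assumed)
fluence  = np.array([1e4, 5e3, 2e3, 1e3, 1e2])        # n/cm^2 per bin

# Dose = sum over energy bins of fluence(E) * coefficient(E).
dose_pSv = np.sum(fluence * coeffs)
print(f"local skin equivalent dose: {dose_pSv:.3e} pSv")
```

In practice the simulated fluence spectrum is folded this way against each code's coefficient set, which is also how the FLUKA/MCNP/GEANT4 comparison can be quantified per energy bin.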
In the frame of maintenance, upgrade and dismantling activities, activated equipment is removed from the accelerator complex and requires characterization in view of its disposal as radioactive waste. The characterization process consists of a series of radiation measurements, complemented by analytical studies, which quantify the activity of the radionuclides inside an object. A fraction of the radioactive waste produced at CERN presents contact dose rates higher than 100 μSv/h and can therefore be classified as LILW ("low- and intermediate-level radioactive waste"). Owing to the activation mechanisms, these objects are often subject to large activity heterogeneities. The quantification of gamma-emitting radionuclides is typically performed by gamma spectrometry under the assumption of a homogeneous distribution of activity within an object. However, this assumption can lead to an underestimation of the activity of such radionuclides. In this article we perform a gamma spectrometry qualification in order to quantify the impact of assuming a homogeneous distribution.
• γ-spectrometry qualification for activity measurement of intermediate-level waste produced in particle accelerators.
• Quantification of activity heterogeneity effects in radioactive waste.
• Multiple-detector counting to reduce gamma spectrometry uncertainties and perform geometry model optimization.
• Efficiency calibration with optimized geometries.
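The underestimation effect can be seen directly in the standard activity formula A = N / (ε · I_γ · t): a hotspot far from the detector has a lower true full-energy-peak efficiency than the homogeneous model assumes, so the same spectrum corresponds to a higher activity. A sketch with illustrative efficiencies:

```python
def activity_bq(net_counts, live_time_s, efficiency, gamma_yield):
    """Activity from one gamma line: A = N / (eps * I_gamma * t)."""
    return net_counts / (efficiency * gamma_yield * live_time_s)

# Illustrative numbers: a Co-60 1332 keV line (emission probability
# ~0.9998); the two efficiencies are assumed values for a homogeneous
# source model vs. a hotspot at the far side of the object.
N, t, Iy = 1.2e5, 3600.0, 0.9998      # counts, s, gamma yield
A_homog = activity_bq(N, t, 2.0e-3, Iy)
A_hot   = activity_bq(N, t, 8.0e-4, Iy)
print(f"homogeneous: {A_homog:.0f} Bq, hotspot: {A_hot:.0f} Bq "
      f"(ratio {A_hot / A_homog:.2f})")
```

The ratio of the two efficiencies is exactly the factor by which the homogeneous assumption would underestimate the activity, which is what the qualification campaign quantifies experimentally.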
Material activation can sometimes cause large heterogeneities in the distribution of radioactivity (hotspots). Moreover, the sample geometry parameters are not always well known. When performing gamma spectroscopy to quantify the radionuclide inventory of activated materials, predefined models are often used for efficiency calibration to represent the sample geometry (dimensions, source-to-detector distance, material type) and the activity distribution. This simplification introduces uncertainties in the efficiency curves associated with the model and, consequently, in the activity results. In this paper, we develop a new approach, based on ISOCS/LabSOCS, to quantify and reduce the uncertainties originating from the geometry model. The theory is described in this document and an experimental case is discussed.
• Gamma-spectroscopy geometric model uncertainty quantification.
• Gamma-spectroscopy geometric model uncertainty optimization.
• Multi-count and activity consistency.
• Efficiency calibration.
• High-purity germanium detector.
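The principle of geometry-model uncertainty quantification can be sketched by sampling an uncertain geometry parameter and propagating it through a deliberately simplified efficiency law (an inverse-square dependence on distance, which is a toy stand-in for an ISOCS/LabSOCS efficiency computation; all numbers are assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

# If the source-to-detector distance is only known to ~5 mm, the
# efficiency spreads accordingly. Inverse-square law is a toy model.
d_nominal, d_sigma = 150.0, 5.0           # mm (assumed)
d = rng.normal(d_nominal, d_sigma, 50_000)
eff = (d_nominal / d) ** 2                # efficiency relative to nominal

rel_unc = eff.std() / eff.mean()
print(f"relative efficiency uncertainty from geometry: {rel_unc:.1%}")
```

Repeating this for each uncertain geometry parameter, and then constraining the parameters with multi-count consistency checks, is the sense in which the model uncertainty can first be quantified and then reduced.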