• OpenMC is an open source Monte Carlo particle transport code.
• Solid geometry and continuous-energy physics allow high-fidelity simulations.
• Development has focused on high performance and modern I/O techniques.
• OpenMC is capable of scaling up to hundreds of thousands of processors.
• Other features include plotting, CMFD acceleration, and variance reduction.
This paper gives an overview of OpenMC, an open source Monte Carlo particle transport code recently developed at the Massachusetts Institute of Technology. OpenMC uses continuous-energy cross sections and a constructive solid geometry representation, enabling high-fidelity modeling of nuclear reactors and other systems. Modern, portable input/output file formats are used in OpenMC: XML for input, and HDF5 for output. High performance parallel algorithms in OpenMC have demonstrated near-linear scaling to over 100,000 processors on modern supercomputers. Other topics discussed in this paper include plotting, CMFD acceleration, variance reduction, eigenvalue calculations, and software development processes.
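As a rough illustration of the XML input style described above, the sketch below uses only Python's standard library to emit a minimal materials definition. The element and attribute names follow OpenMC's materials.xml conventions as best understood here; treat the specific schema details (and the nuclide fractions) as illustrative assumptions, not an authoritative input file.

```python
import xml.etree.ElementTree as ET

# Build a minimal materials definition in the XML style OpenMC uses.
# Element/attribute names are assumed from OpenMC's materials.xml
# conventions; the composition values are invented for illustration.
materials = ET.Element("materials")
fuel = ET.SubElement(materials, "material", id="1", name="UO2 fuel")
ET.SubElement(fuel, "density", value="10.3", units="g/cm3")
ET.SubElement(fuel, "nuclide", name="U235", ao="0.05")
ET.SubElement(fuel, "nuclide", name="U238", ao="0.95")

xml_text = ET.tostring(materials, encoding="unicode")
print(xml_text)
```

The same split of concerns applies on the output side, where a binary, portable format (HDF5) holds the simulation results rather than plain text.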
Transition-metal nanoparticles possess unique size-dependent optical, electronic, and catalytic properties on the nanoscale, which differ significantly from their bulk properties. In particular, palladium (Pd) nanoparticles have properties applicable to a wide range of applications in catalysis and electronics. However, predictable and controllable nanoparticle synthesis remains challenging because of harsh reaction conditions, artifacts from capping agents, and unpredictable growth. Biological supramolecules offer attractive templates for nanoparticle synthesis because of their precise structure and size. In this article, we demonstrate simple, controllable Pd nanoparticle synthesis on surface-assembled viral nanotemplates. Specifically, we exploit precisely spaced thiol functionalities of genetically modified tobacco mosaic virus (TMV1cys) for facile surface assembly and readily controllable Pd nanoparticle synthesis via simple electroless deposition under mild aqueous conditions. Atomic force microscopy (AFM) studies clearly show tunable surface assembly and Pd nanoparticle formation preferentially on the TMV1cys templates. Grazing incidence small-angle X-ray scattering (GISAXS) further provided an accurate and statistically meaningful route to investigate the broad size range and uniformity of the Pd nanoparticles formed on TMV templates by simply tuning the reducer concentration. We believe that our viral-templated bottom-up approach to tunable Pd nanoparticle formation, combined with the first in-depth characterization via GISAXS, represents a major advancement toward exploiting viral templates for facile nanomaterials/device fabrication. We envision that our strategy can be extended to a wide range of applications, including uniform nanostructure and nanocatalyst synthesis.
We demonstrate hierarchical assembly of tobacco mosaic virus (TMV)-based nanotemplates with hydrogel-based encoded microparticles via nucleic acid hybridization. TMV nanotemplates possess a highly defined structure and a genetically engineered high-density thiol functionality. The encoded microparticles are produced in a high throughput microfluidic device via stop-flow lithography (SFL) and consist of spatially discrete regions containing encoded identity information, an internal control, and capture DNAs. For the hybridization-based assembly, partially disassembled TMVs were programmed with linker DNAs that contain sequences complementary to both the virus 5′ end and a selected capture DNA. Fluorescence microscopy, atomic force microscopy (AFM), and confocal microscopy results clearly indicate facile assembly of TMV nanotemplates onto microparticles with high spatial and sequence selectivity. We anticipate that our hybridization-based assembly strategy could be employed to create multifunctional viral-synthetic hybrid materials in a rapid and high-throughput manner. Additionally, we believe that these viral-synthetic hybrid microparticles may find broad applications in high capacity, multiplexed target sensing.
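The linker-DNA design described above hinges on one sequence carrying regions complementary to two targets: the virus 5′ end and a chosen capture DNA. A minimal sketch of that complementarity calculation is below; the sequences are invented placeholders, not the actual TMV or capture sequences from this work.

```python
# Hypothetical sketch of hybridization-based linker design: the linker
# must hybridize to both the virus 5' end and a selected capture DNA,
# so each binding region is the reverse complement of its target.
# Sequences here are placeholders invented for illustration.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence (5'->3')."""
    return seq.translate(COMPLEMENT)[::-1]

virus_end = "GTAGTA"      # placeholder for the TMV 5'-end sequence
capture_dna = "TTCAGCCA"  # placeholder capture DNA on the microparticle

# One linker, read 5'->3', spanning both binding regions.
linker = reverse_complement(capture_dna) + reverse_complement(virus_end)
print(linker)  # TGGCTGAATACTAC
```

Sequence selectivity in the assembly then follows from Watson-Crick pairing: a linker programmed against one capture DNA should not hybridize to microparticle regions carrying a different capture sequence.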
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 151-158).
Monte Carlo (MC) neutral particle transport methods have long been considered the gold standard for nuclear simulations, but high computational cost has limited their use significantly. However, as we move towards higher-fidelity nuclear reactor analyses, the method has become competitive with traditional deterministic transport algorithms for the same level of accuracy, especially considering the inherent parallelism of the method and the ever-increasing concurrency of modern high performance computers. Yet before such analysis can be practical, several algorithmic challenges must be addressed, particularly with regard to the memory requirements of the method. In this thesis, a robust domain decomposition algorithm is proposed to alleviate this burden, along with models and analysis to support its use for full-scale reactor analysis. Algorithms were implemented in the full-physics Monte Carlo code OpenMC, and tested for a highly-detailed PWR benchmark: BEAVRS. The proposed domain decomposition implementation incorporates efficient algorithms for scalable inter-domain particle communication in a manner that is reproducible with any pseudo-random number seed. Algorithms are also proposed to scalably manage material and tally data with on-the-fly allocation during simulation, along with numerous optimizations required for scalability as the domain mesh is refined and divided among thousands of compute processes. The algorithms were tested on two supercomputers, namely the Mira Blue Gene/Q and the Titan XK7, demonstrating good performance with realistic tallies and materials requiring over a terabyte of aggregate memory. Performance models were also developed to more accurately predict the network and load imbalance penalties that arise from communicating particles between distributed compute nodes tracking different spatial domains.
These were evaluated using machine properties and tallied particle movement characteristics, and empirically validated with observed timing results from the new implementation. Network penalties were shown to be almost negligible with per-process particle counts as low as 1000, and load imbalance penalties higher than a factor of four were not observed or predicted for finer domain meshes relevant to reactor analysis. Load balancing strategies were also explored, and intra-domain replication was shown to be very effective at improving parallel efficiencies without adding significant complexity to the algorithm or burden to the user. Performance of the strategy was quantified with a performance model, and shown to agree well with observed timings. Imbalances were shown to be almost completely removed for the finest domain meshes. Finally, full-core studies were carried out to demonstrate the efficacy of domain-decomposed Monte Carlo in tackling the full scope of the problem. A detailed mesh required for a robust depletion treatment was used, and good performance was demonstrated for depletion tallies with 206 nuclides. The largest runs scored six reaction rates for each nuclide in 51M regions for a total aggregate memory requirement of 1.4 TB, and particle tracking rates were consistent with those observed for smaller non-domain-decomposed runs with equivalent tally complexity. These types of runs were previously not achievable with traditional Monte Carlo methods, and can be accomplished with domain decomposition with between 1.4x and 1.75x overhead with simple load balancing.
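The load imbalance penalty discussed above stems from a simple mechanism: each process tracks only the particles in its spatial domain, so the most heavily loaded domain sets the pace of the whole simulation. The toy sketch below illustrates one common imbalance metric, the ratio of maximum to mean per-domain load; the domain count and particle distribution are invented for illustration and are not taken from the thesis.

```python
from collections import Counter
import random

# Toy illustration of domain load imbalance: particles are binned to
# the domain that owns them, and the straggler domain dominates runtime.
# The layout and source distribution below are invented, not the
# thesis's actual BEAVRS decomposition.
random.seed(42)

n_domains = 8
# Bias particles toward low-index domains to mimic an uneven source.
particles = [min(int(random.expovariate(1.0) * 2), n_domains - 1)
             for _ in range(10_000)]

load = Counter(particles)                   # particles per domain
mean_load = len(particles) / n_domains
imbalance = max(load.values()) / mean_load  # > 1 means a straggler domain

print(f"per-domain loads: {sorted(load.items())}")
print(f"load imbalance factor: {imbalance:.2f}")
```

Intra-domain replication, as described above, attacks exactly this ratio: assigning extra processes to the overloaded domains drives the effective per-process load back toward the mean.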
by Nicholas Edward Horelik.
Ph. D.
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 88-90).
Studies are underway in support of the MIT research reactor (MITR-II) conversion from highly enriched uranium (HEU) to low enriched uranium (LEU), as required by recent non-proliferation policy. With the same core configuration and similar assembly type, high-density monolithic U-Mo fuel will replace the current HEU fuel with comparable performance. Part of the required analysis for relicensing includes detailed fuel management and burnup studies with the new LEU fuel, to be carried out with a recently developed fuel management tool called MCODE-FM. This code package is a Python wrapper enabling automatic fuel shuffling between successive runs of MIT's MCODE, which couples MCNP with ORIGEN for full-core neutronics and depletion. In this work, the capabilities of MCODE have been expanded, and the effects of depletion mesh parameters have been explored. Several features have been added to the fuel management tool to encompass the full range of fuel management options needed for detailed analysis, including assembly flipping, rotation, and temporary storage above the core. In addition, an option to easily manage experiments and custom dummy elements has been added, and a parallel version of MCODE for MCODE-FM that better handles finer discretizations of full-core runs has been developed. These changes have been made in the main wrapper utility as well as the graphical user interface (GUI). In addition to the new MCODE-FM capabilities, a suite of automatic data analysis utilities was developed to consistently parse results. These include utilities to extract or calculate isotope data, fission powers, blade heights, peaking factors, and 3D VTK files for visualization at any time step. The suite has been developed as a series of Python scripts, accessible also through the MCODE-FM GUI.
Finally, the effects of the spatial discretization parameters for the depletion mesh have been explored, and mesh choice recommendations have been made for different types of studies. In summary, coarser meshes in the radial and lateral dimensions have been found to yield conservative power peaking results, whereas finer discretization is needed axially. Thus for iterative fuel management studies a fast-running depletion mesh of 8 axial regions, 3 radial regions, and 1 lateral region can be used. However, for safety studies and benchmarking that only need to run once or twice, 16 axial regions, 15 or 18 radial regions (HEU or LEU, respectively), and 4 lateral regions should be used.
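The mesh recommendations above imply very different region counts, which is what drives the run-time trade-off between the fast-running and high-fidelity choices. The arithmetic sketch below assumes, for illustration only, that regions multiply as axial x radial x lateral per fuel plate; the actual MCODE-FM bookkeeping may differ.

```python
# Sketch of the depletion-region counts implied by the recommended
# meshes. The per-plate product axial * radial * lateral is a
# simplifying assumption for illustration, not MCODE-FM's exact model.

def regions_per_plate(axial: int, radial: int, lateral: int) -> int:
    return axial * radial * lateral

fast_mesh = regions_per_plate(8, 3, 1)          # iterative fuel management
safety_mesh_leu = regions_per_plate(16, 18, 4)  # one-off LEU safety study

print(fast_mesh)        # 24 regions per plate
print(safety_mesh_leu)  # 1152 regions per plate
```

A roughly 50x jump in depletion regions per plate makes clear why the coarse mesh is reserved for iterative shuffling studies while the fine mesh is held for one-off safety and benchmarking runs.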
by Nicholas E. Horelik.
S.M.