An important prerequisite for the simulation-based assessment of energy systems at urban scale is the availability of high-quality, well-formatted and semantically structured data. Unfortunately, best practices and state-of-the-art approaches for urban data modelling are hardly applied in the context of energy-related simulations, so that data management and data access often become tedious and cumbersome tasks. This paper presents the so-called Simulation Package, i.e., a data model extending the 3D City Database for CityGML, and its derived data access layer, both aiming to bridge this gap between semantic 3D city modelling and simulation in the context of urban energy systems. The feasibility of this approach is demonstrated with the help of a concrete example, in which the proposed extension has been implemented and integrated into a simulation toolchain. The aim is that the availability of a common, shared data model and the proof-of-concept implementation will foster adoption and further improvement in the future.
The integration of variable renewable resources and decentralized energy technologies creates the need for greater flexibility of energy demand. In order to fully deploy a demand side management approach, synergies between interconnected energy systems have to be systematically exploited.
Taking this standpoint, this study proposes a new approach to explore the potential of integrated multi-energy systems. The approach consists of two main steps: (1) performance simulation of selected energy infrastructures and (2) estimation of the related techno-economic performance indicators. Step (1) extends the work presented in previous literature by including a novel co-simulation feature. In step (2), the levelized cost of energy and location-dependent emission factors are used as key performance indicators.
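As an illustration of step (2), the levelized cost of energy is conventionally defined as the ratio of discounted lifetime costs to discounted lifetime energy output. The following minimal sketch assumes a simplified cost structure (a single capital expenditure plus constant annual operating cost and constant annual energy yield); the function and parameter names are illustrative and do not reflect the study's actual model:

```python
def lcoe(capex, annual_opex, annual_energy, discount_rate, lifetime):
    """Levelized cost of energy (cost per unit of energy delivered).

    Simplified sketch: one upfront investment (capex), constant yearly
    operating cost (annual_opex) and constant yearly energy output
    (annual_energy), discounted over `lifetime` years.
    """
    # Discount factors for years 1..lifetime
    disc = [(1.0 + discount_rate) ** -t for t in range(1, lifetime + 1)]
    total_cost = capex + sum(annual_opex * d for d in disc)
    total_energy = sum(annual_energy * d for d in disc)
    return total_cost / total_energy
```

With a zero discount rate this reduces to total cost over total energy; a positive rate spreads the upfront investment over less discounted energy, raising the levelized cost.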
In this paper, the presented approach is demonstrated by implementing two demand side management options for shaving peak heat demand, with a Swedish residential neighborhood as a case study. The first option explores the potential of storing heat in the thermal mass of residential buildings; the proposed strategies reduce primary energy consumption by up to 70%, depending on the indoor comfort requirements. The second option estimates the techno-economic feasibility of a new set of scenarios based on the integration of geothermal distributed heat pumps within a district heating network. The district heating scenario is found to be the most techno-economically convenient. Nevertheless, a moderate penetration of distributed heat pumps (around 20%) is shown to offer a good trade-off with the reduction of CO2 emissions.
• A techno-economic and environmental approach for multi-energy integrated systems.
• Thermal mass control as a solution for reducing the generation cost of heating.
• District heating networks as techno-economically optimal urban infrastructures.
A driving force for the realization of a sustainable energy supply in Europe is the integration of distributed, renewable energy resources. Due to their dynamic and stochastic generation behaviour, utilities and network operators are confronted with a more complex operation of the underlying distribution grids. Additionally, the higher flexibility on the consumer side through partly controllable loads, ongoing changes of regulatory rules, technology developments, and the liberalization of energy markets require the system's operation to adapt. Sophisticated design approaches together with proper operational concepts and intelligent automation provide the basis for turning the existing power system into an intelligent entity, a so-called smart grid. To reap the benefits that come along with such intelligent behaviour, system-level testing is expected to play a significantly larger role in the development of future solutions and technologies. However, proper validation approaches, concepts, and corresponding tools are still partly missing. This paper addresses these issues by discussing the progress in the integrated Pan-European research infrastructure project ERIGrid, where proper validation methods and tools for smart grid systems and solutions are currently being developed.
One of the main components of the CMS experiment is the Silicon Tracker. This device, designed to measure the trajectories of charged particles, is composed of approximately 16,000 planar silicon detector modules, which makes it the biggest of its kind. However, systematic measurement errors, caused by unavoidable inaccuracies in the construction and assembly phase, significantly reduce the precision of the measurements. The geometrical corrections that are therefore required have to be known to an accuracy better than the intrinsic resolution of the detector modules. The Kalman Alignment Algorithm is a novel approach to extracting a set of alignment constants from a large collection of recorded particle tracks, and is applicable even to a system as big as the CMS Tracker. To show that the method is functional and well understood, and thus suitable for the data-taking period of the CMS experiment, two case studies are presented and discussed here.
The Kalman alignment algorithm has been specifically developed to cope with the demands that arise from the specifications of the CMS Tracker. The algorithmic concept is based on the Kalman filter formalism and is designed to avoid the inversion of large matrices. Most notably, the algorithm strikes a balance between conventional global and local track-based alignment algorithms by restricting the computation of alignment parameters not only to alignable objects hit by the same track, but also to all other alignable objects that are significantly correlated. Nevertheless, this feature comes with various trade-offs: mechanisms are needed that determine which alignable objects are significantly correlated and that keep track of these correlations. Due to the large number of alignable objects involved in each update (at least compared to local alignment algorithms), the time spent retrieving and writing alignment parameters, as well as the required user memory, becomes a significant factor. The large-scale test presented here applies the Kalman alignment algorithm to the (misaligned) CMS Tracker barrel and demonstrates the feasibility of the algorithm in a realistic scenario. It is shown that both the computation time and the amount of required user memory are within reasonable bounds, given the available computing resources, and that the obtained results are satisfactory.
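To make the "no large matrix inversion" property concrete: in a single Kalman filter measurement update, the only matrix that is inverted is the residual covariance, whose dimension is set by the measurement, never the full covariance of all alignment parameters. The sketch below is a generic textbook Kalman update with illustrative variable names; it is not the actual CMS implementation:

```python
import numpy as np

def kalman_update(x, C, H, m, V):
    """One Kalman-filter measurement update (generic sketch).

    x: current parameter estimate, C: its covariance,
    H: linearized measurement model (Jacobian),
    m: measured values, V: measurement covariance.
    Only R, whose size equals the measurement dimension, is inverted.
    """
    r = m - H @ x                    # residual
    R = V + H @ C @ H.T              # residual covariance (small matrix)
    K = C @ H.T @ np.linalg.inv(R)   # gain: maps residual to parameter space
    x_new = x + K @ r                # updated parameter estimate
    C_new = C - K @ H @ C            # updated (reduced) covariance
    return x_new, C_new
```

In the alignment context, each processed track plays the role of one such measurement, sequentially refining the estimate of the alignment parameters and their correlations.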
Simulation-driven design has become an important design process in many technological domains. It allows a more rapid deployment of innovative technology in products that have to fulfil high quality standards. In view of the success of this approach, it is also becoming an increasingly important tool for the development of the future energy system. Due to the size and complexity of such systems, this is however a challenging task. Energy systems not only combine a multitude of physical domains, related directly to the processes of generation, storage, distribution and consumption, but will in the future also increasingly rely on communication technologies and software infrastructure for information exchange and control purposes. This article evaluates two distinct software tools, Simulink/Simscape and Ptolemy II, both of which have the potential to serve as a framework for modelling, simulating and analysing such cyber-physical energy systems.
Results are presented of a search for compositeness in electrons and muons using a data sample of pp collisions at a center-of-mass energy √s = 7 TeV collected with the CMS detector at the LHC and corresponding to an integrated luminosity of 5.0 fb⁻¹. Excited leptons (ℓ*) are assumed to be produced via contact interactions in conjunction with a standard model lepton and to decay via ℓ* → ℓγ, yielding a final state with two energetic leptons and a photon. The number of events observed in data is consistent with that expected from the standard model. The 95% confidence upper limits on the cross section for the production and decay of excited electrons (muons), with masses ranging from 0.6 to 2 TeV, are 1.48 to 1.24 fb (1.31 to 1.11 fb). Excited leptons with masses below 1.9 TeV are excluded for the case where the contact interaction scale equals the excited lepton mass. The limits on the cross sections are the most stringent ones published to date.
A measurement of the tt̄ production cross section in pp collisions at √s = 7 TeV is presented. The results are based on data corresponding to an integrated luminosity of 2.3 fb⁻¹ collected by the CMS detector at the LHC. Selected events are required to have one isolated, high transverse momentum electron or muon, large missing transverse energy, and hadronic jets, at least one of which must be consistent with having originated from a b quark. The measured cross section is 158.1 ± 2.1 (stat.) ± 10.2 (syst.) ± 3.5 (lum.) pb, in agreement with standard model predictions.
Many models of new physics, including versions of supersymmetry (SUSY), predict production of events with low missing transverse energy, electroweak gauge bosons, and many energetic final-state particles. The stealth SUSY model yields this signature while conserving R-parity by means of a new hidden sector in which SUSY is approximately conserved. The results of a general search for new physics, with no requirement on missing transverse energy, in events with two photons and four or more hadronic jets are reported. The study is based on a sample of proton-proton collisions at √s = 7 TeV corresponding to 4.96 fb⁻¹ of integrated luminosity collected with the CMS detector in 2011. Based on good agreement between the data and the standard model expectation, the data are used to determine model-independent cross-section limits and a limit on the squark mass in the framework of stealth SUSY. With this first study of its kind, squark masses less than 1430 GeV are excluded at the 95% confidence level.