ENABLING CITY DIGITAL TWINS THROUGH URBAN LIVING LABS Hristov, P. O.; Petrova-Antonova, D.; Ilieva, S. ...
International archives of the photogrammetry, remote sensing and spatial information sciences,
05/2022, Volume:
XLIII-B1-2022
Journal Article, Conference Proceeding
Peer reviewed
Open access
The population density in urban areas is rapidly rising, leading to a constant need for new infrastructure and services for citizens. To reduce the time to implementation and optimise the monetary cost of various solutions, the plans and policies of local authorities and stakeholders would benefit from undergoing a series of virtual stress tests. To this end, prescriptive and predictive technologies are widely adopted to optimise city planning and to understand urban processes and the environment, such as air pollution and transportation. Nevertheless, holistic sandboxes tightly integrated with cities are still largely lacking. The city digital twin is a promising concept that provides a tool for exploring new solutions in a controlled environment before their deployment. The digital twin is a virtual replica of the real city, which collects data from the infrastructure, processes and services using not only the available systems but also purpose-built connected devices and sensors. In this context, the establishment of urban living labs facilitates the monitoring and understanding of urban processes and enriches the digital twin with highly relevant data. This paper presents an urban living lab under deployment in the Lozenets district of Sofia, Bulgaria. It is part of a larger initiative for developing a city digital twin of Sofia to support the design, exploration, and experimentation of different solutions. The living lab is equipped with sensors for monitoring air quality, atmospheric parameters, noise pollution and pedestrian flows. In addition, a Light Detection and Ranging (LiDAR) system is realised as an edge computing facility at one of the busiest intersections of the district. Along with the equipment, the paper describes the architecture and components of the platform for data collection, storage, processing, and visualization. Finally, high-priority studies are presented, and their demographic and economic impact is discussed.
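As an illustration of the data-collection path such a platform implies, here is a minimal Python sketch of pushing one sensor reading to an ingestion endpoint. The endpoint URL, station identifier and field names are hypothetical stand-ins, not the actual Sofia platform API.

# Minimal sketch: one living-lab sensor reading posted as JSON.
# All names here (endpoint, station id, fields) are hypothetical.
import json
import urllib.request
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SensorReading:
    station_id: str      # e.g. an intersection in the Lozenets district
    timestamp: str       # ISO-8601, UTC
    pm10: float          # air quality, ug/m3
    noise_db: float      # noise pollution, dB(A)
    pedestrians: int     # pedestrian count over the measurement interval

def push_reading(reading: SensorReading, url: str) -> int:
    """POST one reading as JSON; return the HTTP status code."""
    body = json.dumps(asdict(reading)).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    r = SensorReading("lozenets-01",
                      datetime.now(timezone.utc).isoformat(),
                      pm10=23.4, noise_db=61.2, pedestrians=118)
    print(push_reading(r, "https://example.org/ingest"))  # hypothetical URL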
•A comprehensive framework for engineering design under uncertainty.
•Focus on limited and partial information, time series, and black-box models.
•Calibration, sensitivity analysis and RBDO performed with mixed uncertainty.
•Efficient sampling of complex uncertainty models using sliced normal distributions.
•Framework is demonstrated on the 2020 NASA challenge on design under uncertainty.
In this paper we present a framework for addressing a variety of engineering design challenges with limited empirical data and partial information. The framework includes guidance on the characterisation of a mixture of uncertainties, efficient methodologies to integrate data into design decisions, to conduct reliability analysis, and to perform risk/reliability-based design optimisation. To demonstrate its efficacy, the framework has been applied to the NASA 2020 uncertainty quantification challenge; the results and discussion in the paper relate to this application.
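A minimal Python sketch of one ingredient such a framework needs: bounding a failure probability under mixed uncertainty, where an aleatory variable is sampled and an epistemic parameter is only known to lie in an interval. The performance function and the interval bounds below are hypothetical stand-ins for a black-box model.

# Sketch: failure-probability bounds under mixed aleatory/epistemic uncertainty.
# g is a hypothetical limit state (failure when g < 0); the epistemic parameter
# theta is known only to lie in [0.1, 0.5], so p_f is reported as a range.
import numpy as np

rng = np.random.default_rng(0)

def g(x_aleatory, theta_epistemic):
    """Hypothetical limit state: failure when g < 0."""
    return 3.0 - x_aleatory - theta_epistemic * x_aleatory**2

x = rng.normal(0.0, 1.0, size=100_000)     # aleatory input: standard normal
thetas = np.linspace(0.1, 0.5, 41)         # epistemic interval, discretised

pf = np.array([(g(x, th) < 0).mean() for th in thetas])
print(f"failure probability range: [{pf.min():.4f}, {pf.max():.4f}]")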
The NA48/2 experiment at CERN collected a large sample of charged kaon decays to final states with multiple charged particles in 2003–2004. A new upper limit on the rate of the lepton number violating decay K±→π∓μ±μ± is reported: B(K±→π∓μ±μ±) < 8.6×10⁻¹¹ at 90% CL. Searches for two-body resonances X in K±→πμμ decays (such as heavy neutral leptons N4 and inflatons χ) are also presented. In the absence of signals, upper limits are set on the products of branching fractions B(K±→μ±N4)B(N4→πμ) and B(K±→π±X)B(X→μ⁺μ⁻) for ranges of assumed resonance masses and lifetimes. The limits are in the (10⁻¹¹, 10⁻⁹) range for resonance lifetimes below 100 ps.
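For orientation, the generic single-event-sensitivity relation behind limits of this kind (not the paper's exact statistical procedure) is:

% With N_K kaon decays in the analysed sample, selection efficiency
% \varepsilon, and an upper limit N_{\mathrm{UL}} on the number of signal
% candidates at 90% CL:
\[
  \mathcal{B}(K^{\pm}\to\pi^{\mp}\mu^{\pm}\mu^{\pm})
    \;<\; \frac{N_{\mathrm{UL}}}{N_K\,\varepsilon}
  \quad \text{at } 90\%~\mathrm{CL}.
\]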
Abstract
The higher efficiency of tracking photovoltaic systems is an indisputable fact, and computational methods for the respective energy gains are well known. However, the investment in tracking systems is higher in comparison to fixed ones due to the need for additional equipment and automation systems. Options for periodic changes of the azimuth of PV modules on single-axis solar trackers, and the corresponding incident solar energy, are studied in this paper. An approach for numerical determination of the daily periods of cyclic rotations of the trackers and the subsequent electricity production is developed. It was applied to estimate the energy gains from such cyclic changes for a PV system in a ceramic factory in Bulgaria.
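For context, the textbook incidence-angle geometry such a computation rests on, together with the period-wise energy sum it suggests (the paper's exact discretisation scheme is not reproduced here, so the splitting below is only an assumed form):

% Incidence angle \theta for a module with tilt \beta and azimuth \gamma,
% given solar zenith \theta_z and solar azimuth \gamma_s:
\[
  \cos\theta \;=\; \cos\theta_z\cos\beta
      \;+\; \sin\theta_z\sin\beta\cos(\gamma_s-\gamma).
\]
% Splitting the day into periods [t_{i-1}, t_i], each with a fixed module
% azimuth \gamma_i, the incident energy is approximated by
\[
  E \;\approx\; \sum_i \int_{t_{i-1}}^{t_i}
      G(t)\,\max\{\cos\theta(t;\gamma_i),\,0\}\,\mathrm{d}t ,
\]
% where G(t) is the beam irradiance.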
Abstract
A measurement of the form factors of charged kaon semileptonic decays is presented, based on 4.4×10⁶ K± → π⁰e±νe (Ke3±) and 2.3×10⁶ K± → π⁰μ±νμ (Kμ3±) decays collected in 2004 by the NA48/2 experiment. The results are obtained with improved precision as compared to earlier measurements. The combination of measurements in the Ke3± and Kμ3± modes is also presented.
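For reference, the standard Kl3 form-factor parameterisations used in analyses of this kind are the quadratic and pole forms, with t = (p_K − p_π)² the squared four-momentum transfer to the lepton pair:

% Quadratic and pole parameterisations of the vector (+) and scalar (0)
% form factors, normalised to f(0):
\[
  \tilde f_{+,0}(t) \;=\; 1 \;+\; \lambda'_{+,0}\,\frac{t}{m_{\pi^+}^2}
      \;+\; \frac{1}{2}\,\lambda''_{+,0}\!\left(\frac{t}{m_{\pi^+}^2}\right)^{\!2},
  \qquad
  \tilde f_{+,0}(t) \;=\; \frac{M_{V,S}^2}{M_{V,S}^2 - t}.
\]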
•Computational resources are used efficiently through a combination of subset simulation and Gaussian process emulation.
•Multimodal failure domains are efficiently handled with the use of clustering techniques.
•The selection of new points, duration of learning and emulator quality are controlled adaptively.
•The algorithm is readily extensible for use in standard and reliability-based design optimisation.
This paper presents an approximation method for performing efficient reliability analysis with complex computer models. The computational cost of industrial-scale models can cause problems for sampling-based reliability analysis, because the failure modes of the system typically occupy a small region of the performance space and thus require relatively large sample sizes to estimate their characteristics accurately. The sequential sampling method proposed in this article combines Gaussian process-based optimisation and subset simulation. Gaussian process emulators construct a statistical approximation to the output of the original code, which is both affordable to evaluate and carries its own measure of predictive uncertainty. Subset simulation is used as an integral part of the algorithm to efficiently populate those regions of the surrogate which are likely to lead to the performance function exceeding a predefined critical threshold. The emulator itself is used to inform decisions about efficiently using the original code to augment its predictions. The iterative nature of the method ensures that an arbitrarily accurate approximation of the failure region is developed at a reasonable computational cost. The presented method is applied to an industrial model of a biodiesel filter.
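A simplified Python sketch of the adaptive pattern the method builds on, with plain Monte Carlo standing in for subset simulation and a cheap hypothetical function standing in for the expensive model. This is not the paper's algorithm, only the generic active-learning loop: fit a Gaussian process, add the model run the emulator is least sure about near the failure threshold, and stop once predictions there are confident.

# Sketch: GP-based adaptive reliability estimation (AK-MCS-style loop).
# g is a hypothetical stand-in for the expensive computer model;
# failure occurs when g(x) exceeds the threshold.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def g(x):
    return x[:, 0] ** 2 + np.sin(3 * x[:, 1])

threshold = 4.0
pool = rng.normal(size=(20_000, 2))       # Monte Carlo candidate population

X = rng.normal(size=(12, 2))              # initial design: few model runs
y = g(X)

for _ in range(30):                       # adaptive enrichment loop
    gp = GaussianProcessRegressor(RBF(length_scale=1.0),
                                  normalize_y=True).fit(X, y)
    mu, sd = gp.predict(pool, return_std=True)
    U = np.abs(mu - threshold) / np.maximum(sd, 1e-12)  # misclassification risk
    if U.min() > 2.0:                     # stop when the sign of g - threshold
        break                             # is confidently predicted everywhere
    xa = pool[np.argmin(U)]               # most ambiguous candidate point
    X = np.vstack([X, xa])
    y = np.append(y, g(xa[None, :]))      # one more expensive model run

pf = (gp.predict(pool) > threshold).mean()
print(f"estimated failure probability: {pf:.4f}")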
The Western honey bee (Apis mellifera L., Hymenoptera: Apidae) is a species of fundamental economic, agricultural and environmental importance. The aim of this study was to compare the prevalence of some parasitic and viral pathogens in local honey bees from the Rhodope Mountains and plain regions. To achieve this goal, molecular screening for two of the most widespread Nosema spp. and molecular identification of six honey bee viruses – Deformed wing virus (DWV), Acute bee paralysis virus (ABPV), Chronic bee paralysis virus (CBPV), Sacbrood virus (SBV), Kashmir bee virus (KBV), and Black queen cell virus (BQCV) – were performed. Molecular analysis was carried out on 168 honey bee samples from apiaries situated in three different parts of the country, where a mix of different honey bee subspecies was reared. In South Bulgaria (the Rhodope Mountains), a local honey bee called Apis mellifera rodopica (a local ecotype of A. m. macedonica) was bred, while in the other two regions (plains) different introduced subspecies existed. The results showed that the samples from the lowland regions of the country had the highest prevalence (70.5%) of N. ceranae, while those from the mountainous parts had the lowest rate (5.2%). Four of the honey bee viruses were identified – DWV (10 samples / 5.9%), followed by SBV (6 / 3.6%) and ABPV (2 / 1.2%), and one case of BQCV. In conclusion, the local honey bee A. m. rodopica (despite the higher number of samples) showed a lower prevalence of both nosemosis and viral infections. Therefore, this honey bee should be preserved as part of the national biodiversity.
ALFA: The new ALICE-FAIR software framework Al-Turany, M.; Buncic, P.; Hristov, P. ...
Journal of physics. Conference series,
12/2015, Volume:
664, Issue:
7
Journal Article
Peer reviewed
Open access
The commonalities between the ALICE and FAIR experiments and their computing requirements led to the development of large parts of a common software framework in an experiment-independent way. The FairRoot project has already shown the feasibility of such an approach for the FAIR experiments and of extending it beyond FAIR to experiments at other facilities [1, 2]. The ALFA framework is a joint development between the ALICE Online-Offline (O2) and FairRoot teams. ALFA is designed as a flexible, elastic system which balances reliability and ease of development with performance, using multi-processing and multi-threading. A message-based approach has been adopted; such an approach will support the use of the software on different hardware platforms, including heterogeneous systems. Each process in ALFA assumes limited communication with, and reliance on, other processes. Such a design adds horizontal scaling (multiple processes) to the vertical scaling provided by multiple threads to meet computing and throughput demands. ALFA does not dictate any application protocols; potentially, any content-based processor or any source can change the application protocol. The framework supports different serialization standards for data exchange across different hardware platforms and programming languages.
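To illustrate the message-based pattern (ALFA's transport layer, FairMQ, builds on ZeroMQ-style messaging), here is a minimal pyzmq PUSH/PULL pair. It demonstrates the pattern only and is not the ALFA/FairMQ API; in a real deployment the two sockets would live in separate processes.

# Illustration of message passing between loosely coupled "devices";
# this is a generic pyzmq sketch, not ALFA/FairMQ code.
import zmq

ctx = zmq.Context()

producer = ctx.socket(zmq.PUSH)          # one device: produces data messages
producer.bind("tcp://127.0.0.1:5555")

consumer = ctx.socket(zmq.PULL)          # a separate process in practice
consumer.connect("tcp://127.0.0.1:5555")

producer.send(b"raw-data-chunk-0")       # content-agnostic payload: ALFA does
msg = consumer.recv()                    # not dictate the application protocol
print(msg)

consumer.close()
producer.close()
ctx.term()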
The Large Hadron Collider (LHC), operating at the international CERN laboratory in Geneva, Switzerland, is leading Big Data driven scientific exploration. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate a growth of storage needs by orders of magnitude; this will require new approaches to data storage organization and data handling. In our project we address the fundamental problem of designing an architecture that integrates distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications and that provides access to data from heterogeneous computing facilities. We have prototyped a federated storage for the Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russia-CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing, cloud computing and Grid platforms for the ALICE and ATLAS experiments. We present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to use federated data storage efficiently, experiment-wide, within national academic facilities for high energy and nuclear physics as well as for other data-intensive science applications, such as bioinformatics.
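As a flavour of the synthetic endpoint tests mentioned above, a minimal Python sketch that measures sequential-read throughput. The file path and sizes are hypothetical local stand-ins; real tests would target remote federation endpoints, in practice accessed via protocols such as XRootD.

# Sketch: synthetic sequential-read throughput test of a storage endpoint.
# Path and block/file sizes are hypothetical illustration values.
import os
import time

def read_throughput(path: str, block_size: int = 4 * 1024 * 1024) -> float:
    """Sequentially read a file; return throughput in MB/s."""
    total, start = 0, time.perf_counter()
    with open(path, "rb") as f:
        while chunk := f.read(block_size):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / (1024 * 1024) / elapsed

if __name__ == "__main__":
    path = "/tmp/testfile.bin"                  # hypothetical local stand-in
    with open(path, "wb") as f:                 # create a 64 MB test file
        f.write(os.urandom(64 * 1024 * 1024))
    print(f"{read_throughput(path):.1f} MB/s")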