We propose a novel framework for uncertainty quantification via information bottleneck (IB-UQ) for scientific machine learning tasks, including deep neural network (DNN) regression and neural operator learning (DeepONet). Specifically, we implement the bottleneck with a confidence-aware encoder, which encodes inputs into latent representations according to the confidence that the input data belong to the region where the training data are located, and use a Gaussian decoder to predict the means and variances of the outputs conditional on the representation variables. Furthermore, we propose a data-augmentation-based information bottleneck objective that improves the quality of extrapolation-uncertainty quantification; both the encoder and decoder can be trained by minimizing a tractable variational bound on this objective. In comparison to uncertainty quantification (UQ) methods for scientific learning tasks that rely on Bayesian neural networks with Hamiltonian Monte Carlo posterior estimators, the proposed model is computationally efficient, particularly when dealing with large-scale datasets. The effectiveness of the IB-UQ model is demonstrated through several representative examples, such as regression for discontinuous functions, learning nonlinear operators for partial differential equations, and a large-scale climate map model. The experimental results indicate that the IB-UQ model can handle noisy data, generate robust predictions, and provide reliable uncertainty estimates for out-of-distribution data.
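The Gaussian decoder's training signal can be made concrete: it minimizes the negative log-likelihood of the targets under a predicted mean and variance, which rewards the model for inflating the variance wherever its mean prediction is unreliable. A minimal NumPy sketch of that objective (the function name and toy values are illustrative, not taken from the paper):

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Per-sample negative log-likelihood of y under N(mu, exp(log_var)).

    Minimizing this jointly over mu and log_var trains a Gaussian decoder
    to report a large variance (high uncertainty) where its mean is poor.
    """
    return 0.5 * (log_var + (y - mu) ** 2 / np.exp(log_var)
                  + np.log(2.0 * np.pi))

y = np.array([1.0, 1.0])
good = gaussian_nll(y, mu=np.array([1.0, 1.0]), log_var=np.array([-2.0, -2.0]))
hedged = gaussian_nll(y, mu=np.array([3.0, 3.0]), log_var=np.array([2.0, 2.0]))
bad = gaussian_nll(y, mu=np.array([3.0, 3.0]), log_var=np.array([-2.0, -2.0]))
# A wrong mean with confidently small variance (bad) is penalized far
# more than the same wrong mean with an honestly large variance (hedged).
```

Because the loss is differentiable in both heads, a single optimizer can fit mean and variance jointly; this is the standard heteroscedastic-regression mechanism the abstract's Gaussian decoder relies on.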
•Identify, formulate, and solve the optimal location problem for VSL application areas. Formulation and solution for lane-drop, sag, and tunnel bottlenecks.•The Lighthill-Whitham-Richards model with bounded acceleration (BA-LWR) is used.•Capacity drop formation depends on speed limits and VSL locations.•Define admissibility conditions for congested and uncongested stationary states.•Optimal distances of VSL application areas from bottlenecks increase with the speed limit.
Some studies consider variable speed limit (VSL) control a viable option for preventing traffic breakdown at bottlenecks by limiting the mainline flow with reduced speed limits. However, few studies treat the location of the application area as a design variable of the problem. This paper explains why the location of a VSL control area is crucial to preventing the capacity drop phenomenon at lane-drop bottlenecks. We first define two types of stationary states, congested and uncongested, inside a lane-drop bottleneck, assuming the Lighthill-Whitham-Richards model with bounded acceleration. In particular, the characteristics of these stationary states and their admissibility conditions are discussed thoroughly. If the imposed speed limit is low enough, the location of the VSL application area is irrelevant to ensuring an uncongested stationary state inside the bottleneck. However, for a given range of speed limits, the location of the VSL application area must be designed carefully to allow for uncongested stationary states and prevent the occurrence of the capacity drop. We formulate an optimization problem and show that, contrary to general belief, the larger the speed limit, the farther the VSL application area should be from the bottleneck. Finally, the results are extended to other types of bottlenecks, such as sag or tunnel bottlenecks. To the best of our knowledge, this is the first study to analytically identify, formulate, and solve the optimal location problem for variable speed limit application areas. It makes fundamental contributions to both traffic flow theory (by analyzing the stationary states of VSL-controlled bottlenecks) and traffic control (by determining the optimal location of a VSL application area). Moreover, the results are of practical relevance because they can help establish guidelines for practitioners implementing VSL control strategies.
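The flow-metering effect of a speed limit can be illustrated with a triangular fundamental diagram, a standard LWR ingredient: the maximum flow sustainable at speed v is q(v) = v·w·k_j/(v + w), where w is the backward wave speed and k_j the jam density. This is a generic sketch of that relationship, not the paper's BA-LWR formulation, and the parameter values are made up:

```python
def max_flow_under_speed_limit(v, w=20.0, k_j=150.0):
    """Maximum flow (veh/h/lane) at speed v (km/h) on a triangular
    fundamental diagram with backward wave speed w (km/h) and jam
    density k_j (veh/km): the critical density at speed v is
    k = w*k_j/(v + w), so q = v*k = v*w*k_j/(v + w).
    Flow rises monotonically with v, which is why a reduced speed
    limit meters the flow feeding a downstream bottleneck.
    """
    return v * w * k_j / (v + w)

max_flow_under_speed_limit(40)   # → 2000.0 veh/h
max_flow_under_speed_limit(60)   # → 2250.0 veh/h
max_flow_under_speed_limit(100)  # → 2500.0 veh/h (free-flow capacity)
```

The monotonicity makes the design trade-off concrete: a lower limit restricts inflow more strongly, so only for intermediate limits does the placement of the application area decide whether the bottleneck stays uncongested.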
The hot phonon bottleneck has been under intense investigation in perovskites. In the case of perovskite nanocrystals, there may be hot phonon bottlenecks as well as quantum phonon bottlenecks. While they are widely assumed to exist, evidence is growing for the breaking of potential phonon bottlenecks of both forms. Here, we perform state-resolved pump/probe spectroscopy (SRPP) and time-resolved photoluminescence spectroscopy (t-PL) to unravel hot exciton relaxation dynamics in model systems of bulk-like 15 nm nanocrystals of CsPbBr3 and FAPbBr3, with FA being formamidinium. The SRPP data can be misinterpreted to reveal a phonon bottleneck even at low exciton concentrations, where there should be none. We circumvent that spectroscopic problem with a state-resolved method that reveals an order of magnitude faster cooling and breaking of the quantum phonon bottleneck that might be expected in nanocrystals. Since the prior pump/probe methods of analysis are shown to be ambiguous, we perform t-PL experiments to unambiguously confirm the existence of hot phonon bottlenecks as well. The t-PL experiments reveal there is no hot phonon bottleneck in these perovskite nanocrystals.
Molecular dynamics simulations reproduce the experiments when efficient Auger processes are included. This experimental and theoretical work reveals insight into hot exciton dynamics, how they are precisely measured, and ultimately how they may be exploited in these materials.
Few complete human genomes from the European Early Upper Palaeolithic (EUP) have been sequenced. Using novel sampling and DNA extraction approaches, we sequenced the genome of a woman from “Peştera Muierii,” Romania, who lived ∼34,000 years ago, to 13.5× coverage. The genome shows similarities to modern-day Europeans, but she is not a direct ancestor. Although her cranium exhibits both modern human and Neanderthal features, the genome shows similar levels of Neanderthal admixture (∼3.1%) to most EUP humans, but only half that of the ∼40,000-year-old Peştera Oase 1. All EUP European hunter-gatherers display high genetic diversity, demonstrating that the severe loss of diversity occurred during and after the Last Glacial Maximum (LGM) rather than just during the out-of-Africa migration. The prevalence of genetic diseases is expected to increase with low diversity; however, pathogenic variant load was relatively constant from the EUP to modern times, despite post-LGM hunter-gatherers having the lowest diversity ever observed among Europeans.
•Peştera Muierii woman is related to Europeans, but she is not a direct ancestor•Reduced diversity in Europe was caused by the Last Glaciation, not the out-of-Africa bottleneck•Genetic load appears largely unchanged across 40,000 years of European history•New DNA extraction approach recovers up to 33 times more DNA from ancient remains
Svensson et al. sequence the complete genome of a woman from “Peştera Muierii,” Romania, who lived 34,000 years ago. Her genome is similar to modern-day Europeans, but she is not a direct ancestor. Her genome shows high levels of diversity, revealing that much loss of diversity in non-Africans occurred after she lived rather than before her time.
In this paper, we investigate a bottleneck model in which the capacity of the bottleneck is assumed to be stochastic, following a uniform distribution. Commuters' departure time choice is assumed to follow the user equilibrium principle with respect to mean trip cost. The analytical solution of the proposed model is derived. Both the analytical and numerical results show that capacity variability changes commuters' travel behavior by increasing the mean trip cost and lengthening the peak period. We then design congestion pricing schemes within the framework of the new stochastic bottleneck model, for both a time-varying toll and a single-step coarse toll, and prove that the proposed piecewise time-varying toll can effectively reduce, and even eliminate, the queues behind the bottleneck. We also find that the single-step coarse toll can either advance or postpone the earliest departure time. Furthermore, the numerical results show that the proposed pricing schemes improve the efficiency of the stochastic bottleneck by decreasing the system's total travel cost.
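Why capacity variability raises the mean trip cost can be illustrated with a deterministic point-queue model averaged over capacity draws. This is an illustrative numerical sketch, not the paper's analytical solution; the demand profile and capacity values are made up:

```python
def total_delay(arrivals, capacity):
    """Total queueing delay (veh·steps) behind a point-queue bottleneck
    served at a fixed capacity (veh/step). `arrivals` lists the vehicles
    reaching the bottleneck each step."""
    queue, delay = 0.0, 0.0
    for a in arrivals:
        queue = max(0.0, queue + a - capacity)   # unserved vehicles carry over
        delay += queue / capacity                # queued vehicles each wait ~1/capacity
    return delay

demand = [2, 2, 2, 2, 0, 0, 0, 0]   # a short demand peak
draws = [0.8, 1.0, 1.2]             # equally likely capacity realizations
mean_delay = sum(total_delay(demand, s) for s in draws) / len(draws)
# Delay is convex in capacity, so by Jensen's inequality averaging over a
# random capacity gives more delay than the fixed mean capacity would:
# total_delay(demand, 1.0) is 16.0, while mean_delay is about 18.0.
```

The asymmetry is the point: the delay saved when capacity happens to be high is smaller than the extra delay suffered when it is low, so commuters facing a uniform-capacity bottleneck bear a higher mean trip cost and spread departures over a longer peak.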
A central debate in multitasking research concerns whether cognitive processes related to different tasks proceed only sequentially (one at a time) or can operate in parallel (simultaneously). This review discusses theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their use in future research. Parallel and serial processing of multiple tasks are not mutually exclusive; questions focusing exclusively on either processing mode are therefore too simplistic. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing depends critically on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected in individuals' ability to adjust their performance to environmental demands by flexibly shifting between different processing strategies for scheduling multiple task components.
To meet stringent performance requirements, system administrators must effectively detect undesirable performance behaviours, identify potential root causes, and take adequate corrective measures. The problem of uncovering and understanding performance anomalies and their causes (bottlenecks) in different system and application domains is well studied. To assess progress, identify research trends, and uncover open challenges, we have reviewed the major contributions in the area and present our findings in this survey. Our approach provides an overview of anomaly detection and bottleneck identification research as it relates to the performance of computing systems. By identifying the fundamental elements of the problem, we are able to categorize existing solutions based on multiple factors, such as the detection goals, the nature of applications and systems, system observability, and detection methods.
The cross-entropy loss commonly used in deep learning is closely related to the defining properties of optimal representations, but it does not enforce some of the key properties. We show that this can be remedied by adding a regularization term, which is in turn related to injecting multiplicative noise into the activations of a deep neural network, a special case of which is the common practice of dropout. We show that our regularized loss function can be efficiently minimized using Information Dropout, a generalization of dropout rooted in information-theoretic principles that automatically adapts to the data and can better exploit architectures of limited capacity. When the task is reconstruction of the input, we show that our loss function yields a variational autoencoder as a special case, thus providing a link between representation learning, information theory, and variational inference. Finally, we prove that we can promote the creation of optimal disentangled representations simply by enforcing a factorized prior, a fact that has been observed empirically in recent work. Our experiments validate the theoretical intuitions behind our method, and we find that Information Dropout achieves comparable or better generalization performance than binary dropout, especially on smaller models, since it can automatically adapt the noise to the structure of the network as well as to the test sample.
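The core mechanism, multiplicative log-normal noise on the activations, can be sketched as follows. Note that in Information Dropout the noise scale is a learned function of the input; here it is replaced by a fixed illustrative value, and the regularized objective itself is omitted:

```python
import numpy as np

def information_dropout(x, alpha, rng):
    """Multiply activations by log-normal noise: log(eps) ~ N(0, alpha^2).

    In Information Dropout, alpha is itself computed from the input, so
    the network injects more noise (i.e. discards more information) where
    the data allows it; alpha -> 0 recovers a deterministic layer, and
    binary dropout corresponds to a particular choice of noise law.
    """
    eps = np.exp(alpha * rng.standard_normal(x.shape))
    return x * eps

rng = np.random.default_rng(0)
x = np.ones((4, 3))
noisy = information_dropout(x, alpha=0.5, rng=rng)    # stochastic activations
clean = information_dropout(x, alpha=1e-8, rng=rng)   # ~ identity layer
```

Unlike binary dropout's zero-or-scale mask, the log-normal factor is always positive and its spread is tunable per unit, which is what lets the noise level adapt to the network structure and the sample.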
•This paper investigates how spatially distributed lane changes impact capacity-drop at extended merge, diverge, and weave bottlenecks.•A hybrid approach is developed for tractability: analytical models capture the impacts of lane changes, and numerical simulations quantify capacity-drop.•The effects of various factors on the bottleneck discharge rate, and their mechanisms, are investigated.
This paper investigates the mechanisms by which spatially distributed lane changes (LCs) interact and contribute to “capacity-drop” at three types of extended bottlenecks: merge, diverge, and weave. A hybrid approach is used to study the problem: an analytical approach captures the behavior of merging and diverging LCs, and numerical simulations quantify capacity-drop for various geometric configurations of extended bottlenecks. This study focuses on the impact of LC vehicles' bounded acceleration on “void” (wasted space) creation in traffic streams when they insert/desert at a lower speed, and on the interactions among multiple voids. We found that (1) LCs closer to the downstream end of bottlenecks are more likely to create persisting voids and contribute to capacity-drop; (2) for weave bottlenecks, capacity-drop is governed by two counteracting effects of LCs: persisting voids and the utilization of vacancies created by diverging vehicles; (3) the more balanced the merging and diverging flows, the lower the capacity-drop; and (4) capacity-drop is minimized when merging LCs occur downstream of diverging LCs, and maximized in the opposite alignment.
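The size of the "void" left by a slow-inserting vehicle follows from elementary kinematics: a vehicle merging at speed v_m and accelerating at a bounded rate a toward the prevailing speed v falls behind the stream by the integral of its speed deficit, (v − v_m)²/(2a), before catching up. A minimal sketch of that bound (the numbers are illustrative, not from the paper):

```python
def void_length(v, v_m, a):
    """Space (m) lost behind a vehicle that inserts at v_m (m/s) and
    accelerates at a (m/s^2) up to the stream speed v (m/s): the
    speed deficit v - (v_m + a*t), integrated over the catch-up time
    T = (v - v_m)/a, equals (v - v_m)**2 / (2*a)."""
    return (v - v_m) ** 2 / (2.0 * a)

void_length(25.0, 5.0, 1.0)   # → 200.0 m: a near-stopped insertion
                              # opens a long-lasting void
void_length(25.0, 20.0, 1.0)  # → 12.5 m: inserting near stream speed
                              # wastes little road
```

The quadratic dependence on the speed deficit is why LCs near the downstream end of a bottleneck, where insertions happen at the lowest relative speeds and voids have no room to be refilled, dominate the capacity-drop.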