Uncertainty quantification plays a critical role in decision making and optimization across many fields of science and engineering. The field has gained overwhelming attention among researchers in recent years, resulting in an arsenal of different methods. Probabilistic forecasting, and in particular prediction intervals (PIs), is one of the most widely used techniques in the literature for uncertainty quantification. Researchers have reported studies of uncertainty quantification in critical applications such as medical diagnostics, bioinformatics, renewable energies, and power grids. The purpose of this survey paper is to comprehensively study neural network-based methods for the construction of prediction intervals. It covers how PIs are constructed, optimized, and applied for decision-making in the presence of uncertainties. Different criteria for unbiased PI evaluation are also investigated. The paper additionally provides guidelines for further research in the field of neural network-based uncertainty quantification.
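Two criteria commonly used for unbiased PI evaluation are the PI coverage probability (PICP) and the mean PI width (MPIW). A minimal sketch of both, where the sample intervals and targets are illustrative values, not data from any of the surveyed papers:

```python
# Illustration of two common PI evaluation criteria: PICP and MPIW.
# The intervals and targets below are made-up toy values.

def picp(lower, upper, y):
    """PI coverage probability: fraction of targets inside their interval."""
    covered = sum(1 for lo, hi, t in zip(lower, upper, y) if lo <= t <= hi)
    return covered / len(y)

def mpiw(lower, upper):
    """Mean PI width: average distance between upper and lower bounds."""
    return sum(hi - lo for lo, hi in zip(lower, upper)) / len(lower)

lower = [1.0, 2.0, 3.0, 4.0]
upper = [2.0, 4.0, 5.0, 6.0]
y     = [1.5, 4.5, 4.0, 5.0]

print(picp(lower, upper, y))  # 0.75 (one target falls outside its interval)
print(mpiw(lower, upper))     # 1.75
```

A good PI method keeps PICP at or above the nominal confidence level while minimizing MPIW, which is why the two are usually reported together.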
The rapid growth of the cloud industry has increased the challenges of properly governing cloud infrastructure. Many intelligent systems have been developed that consider uncertainties in the cloud. Intelligent approaches that account for uncertainties enable optimal management with higher profitability. Uncertainties of different levels and types exist across the various domains of cloud computing. This survey discusses all types of uncertainties and their effects on different components of cloud computing. The article first presents the concept of uncertainty and its quantification. A vast number of uncertain events influence the cloud, as it is connected with the entire world through the internet. Five major uncertain parameters are identified, which are directly affected by numerous uncertain events and in turn affect the performance of the cloud. Notable events affecting these major uncertain parameters are also described. In addition, we present notable uncertainty-aware research works in cloud computing. A hype curve on uncertainty-aware approaches in the cloud is presented to visualize current conditions and future possibilities. We expect numerous uncertainty-aware intelligent systems to emerge in cloud management over time. This article aims to give future cloud researchers a deeper understanding of how to manage cloud resources efficiently under uncertainty.
Continuous advancements in technologies such as machine-to-machine interaction and big data analysis have led to the internet of things (IoT), making information sharing and smart decision-making possible using everyday devices. On the other hand, swarm intelligence (SI) algorithms seek to establish constructive interaction among agents regardless of their intelligence level. In SI algorithms, multiple individuals run simultaneously, and possibly cooperatively, to address complex nonlinear problems. In this paper, the application of SI algorithms in IoT is investigated, with a special focus on the internet of medical things (IoMT). The role of wearable devices in IoMT is briefly reviewed. Existing works on applications of SI to IoMT problems are discussed, including disease prediction, data encryption, missing value prediction, resource allocation, network routing, and hardware failure management. Finally, research perspectives and future trends are outlined.
Today’s industry has gradually realized the importance of improving efficiency and reducing costs over the life-cycle of an application. In particular, most cloud-based applications and services now consist of hundreds of micro-services, and the traditional monolithic pattern is no longer suitable for today’s development life-cycle, due to the difficulties of maintenance, scaling, load balancing, and many other associated factors. Consequently, attention has shifted to containerization, a lightweight virtualization technology that uses machine resources more efficiently than a virtual machine (VM). A VM requires a guest OS to be simulated on the host machine, whereas containerization enables applications to share a common OS. Furthermore, containerization allows users to create, delete, or deploy containers effortlessly. To manipulate and manage multiple containers, the leading cloud providers introduced container orchestration platforms such as Kubernetes, Docker Swarm, and Nomad. In this paper, a rigorous study of Kubernetes from an administrator’s perspective is conducted. The serverless computing paradigm is then integrated with Kubernetes to accelerate the development of software applications. Theoretical analysis and experimental evaluation show that developers can adopt this approach to design software architecture and development more efficiently and effectively while minimizing the cost charged by public cloud providers (such as AWS, GCP, and Azure). However, serverless functions come with several issues, such as security threats, the cold start problem, and inadequate function debugging. The challenge is to find ways to address these issues, and addressing all of them at once is difficult.
In this paper, we therefore narrow our analysis to the security aspects of serverless computing. In particular, we quantitatively measure the success probability of attacks on serverless platforms (using attack trees and attack-defense trees), along with possible attack scenarios and related countermeasures. We then show how this quantification can inform end-to-end security enhancement. Finally, the study concludes with research challenges, such as the burdensome and error-prone steps of setting up the platform and investigating the existing security vulnerabilities of serverless computing, and possible future directions.
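The success probability of a composite attack can be propagated through an attack tree by combining leaf probabilities at AND and OR nodes. The following is a minimal sketch; the attack steps and probabilities are hypothetical, not values from the paper:

```python
# Attack-tree success probability: an OR node succeeds if any child
# attack succeeds; an AND node requires every child step to succeed.
# All probabilities below are illustrative placeholders.
from math import prod

def or_node(probs):
    """P(at least one child succeeds), assuming independent children."""
    return 1 - prod(1 - p for p in probs)

def and_node(probs):
    """P(all children succeed), assuming independent children."""
    return prod(probs)

# Root: a serverless function is compromised either by code injection
# (two required steps) OR by credential theft.
p_injection = and_node([0.5, 0.4])   # find a flaw AND exploit it
p_cred_theft = 0.2
p_root = or_node([p_injection, p_cred_theft])
print(round(p_root, 3))  # 0.36
```

An attack-defense tree refines this by attaching countermeasure nodes that reduce the leaf probabilities before they are propagated upward.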
The user has no idea about the credibility of outcomes from deep neural networks (DNNs) when uncertainty quantification (UQ) is not employed. However, current deep UQ classification models mostly capture epistemic uncertainty. Therefore, this paper proposes an aleatory-aware deep UQ method for classification problems. First, we train DNNs through transfer learning and collect numeric output posteriors for all training samples instead of logical outputs. Then we determine the probability of a certain class occurring from the K-nearest output posteriors of the same DNN over the training samples. We name this probability the opacity score, as the paper focuses on the detection of opacity in X-ray images. The score reflects the level of aleatoric uncertainty in the sample. When the NN is certain about the classification of a sample, the probability of one class becomes much higher than the probabilities of the others; for a highly uncertain classification outcome, the probabilities of different classes become close to each other. To capture epistemic uncertainty, we train multiple DNNs with different random initializations, model selections, and augmentations to observe the effect of these training parameters on prediction and uncertainty. To reduce execution time, we first obtain features from the pre-trained NN and then apply them to an ensemble of fully connected layers to obtain the distribution of the opacity score during testing. We also train several ResNet and DenseNet DNNs to observe the effect of model selection on prediction and uncertainty. The paper also demonstrates a patient referral framework based on the proposed uncertainty quantification. The scripts of the proposed method are available at the following link: https://github.com/dipuk0506/Aleatory-aware-UQ.
• Considering asymmetric and heteroscedastic aleatoric uncertainty in image classification.
• Consideration of model-selection uncertainty.
• Formulation of relative probability from K-nearest neighbors, and conversion of relative to realistic probability.
• Study of the effect of augmentations on overall uncertainty.
• An uncertainty-aware patient referral flow chart.
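The K-nearest opacity score described above can be sketched as follows; the toy posteriors, labels, squared-Euclidean distance, and choice of K are assumptions for illustration, not the paper's implementation:

```python
# Sketch of a K-nearest opacity score: the class probabilities for a
# test sample are estimated from the k training samples whose output
# posteriors lie closest to the test sample's posterior.
# Toy data and squared-Euclidean distance are illustrative assumptions.

def opacity_scores(test_post, train_posts, train_labels, k=3, n_classes=2):
    """Return per-class probabilities from the k nearest posteriors."""
    order = sorted(
        range(len(train_posts)),
        key=lambda i: sum((a - b) ** 2 for a, b in zip(test_post, train_posts[i])),
    )
    nearest = order[:k]
    return [sum(train_labels[i] == c for i in nearest) / k for c in range(n_classes)]

train_posts = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]]
train_labels = [0, 0, 1, 1]
scores = opacity_scores([0.85, 0.15], train_posts, train_labels)
print(scores)  # class 0 dominates: roughly [2/3, 1/3]
```

When the two scores are close to each other (e.g. near [0.5, 0.5]), the sample carries high aleatoric uncertainty and is a candidate for referral.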
Purpose: This study proposes the adoption of artificial neural network (ANN)-based prediction intervals (PIs) to give a more reliable prediction of labour productivity using historical data.
Design/methodology/approach: Using the proposed PI method, various sources of uncertainty affecting predictions can be accounted for, and a PI is produced instead of a less reliable single-point estimate. The proposed PI consists of a lower and an upper bound within which the realization of the predicted variable, namely labour productivity, is anticipated to fall with a defined probability, represented as a confidence level (CL).
Findings: The proposed PI method is implemented on a case study project to predict labour productivity. The quality of the generated PIs for labour productivity is investigated at three confidence levels. The results show that the proposed method can predict the value of labour productivity efficiently.
Practical implications: This study is the first attempt in construction management to shift from deterministic point predictions to interval forecasts to improve the reliability of predictions. The proposed PI method will help project managers obtain accurate and credible predictions of labour productivity using historical data. With a better understanding of future outcomes, project managers can adopt appropriate improvement strategies to enhance labour productivity before commencing a project.
Originality/value: Point predictions provided by traditional deterministic ANN-based forecasting methodologies may be unreliable due to the different sources of uncertainty affecting predictions. The current study proposes ANN-based PIs as an alternative, robust tool for more reliable prediction of labour productivity using historical data. Using the proposed method, various sources of uncertainty affecting the predictions are accounted for, and a PI is produced instead of a less reliable single-point estimate.
Neural networks (NNs) are extensively used in the modelling, optimization, and control of nonlinear plants. NN-based inverse-type point prediction models are commonly used for nonlinear process control. However, prediction errors (root mean square error (RMSE), mean absolute percentage error (MAPE), etc.) increase significantly in the presence of disturbances and uncertainties. In contrast to a point forecast, a prediction interval (PI)-based forecast carries extra information, such as the prediction accuracy. For a given confidence level, the PI provides tight upper and lower bounds that account for uncertainties due to model mismatch and time-dependent or time-independent noise. Using PIs as additional inputs to an NN controller (NNC) can improve controller performance. In the present work, PIs are utilized in control applications; in particular, PIs are integrated into the NN internal model-based control framework. A PI-based model developed using the lower upper bound estimation (LUBE) method serves as an online estimator of PIs for the proposed PI-based controller (PIC). The PIs, along with the other inputs of a traditional NN, are used to train the PIC to predict the control signal. The proposed controller is tested on two case studies: a continuous stirred tank reactor (case 1) and a numerical nonlinear plant model (case 2). Simulation results reveal that the tracking performance of the proposed controller is superior to that of the traditional NNC in terms of setpoint tracking and disturbance rejection. More precisely, for setpoint tracking with step changes, the proposed PIC achieves 36% and 15% improvements over the NNC in terms of integral absolute error (IAE) for case 1 and case 2, respectively.
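The LUBE method typically trains the bound-estimating NN by minimizing a coverage-width criterion (CWC) rather than a point-error loss. A hedged sketch of one common CWC variant follows; the hyperparameters `eta` and the nominal confidence `mu` are assumed values, and the paper's exact cost function may differ:

```python
# Sketch of a coverage-width criterion (CWC): a normalized mean PI
# width, inflated by an exponential penalty whenever the empirical
# coverage (PICP) falls below the nominal confidence level mu.
import math

def cwc(lower, upper, y, mu=0.95, eta=50.0):
    n = len(y)
    picp = sum(1 for lo, hi, t in zip(lower, upper, y) if lo <= t <= hi) / n
    y_range = max(y) - min(y)
    nmpiw = sum(hi - lo for lo, hi in zip(lower, upper)) / (n * y_range)
    penalty = math.exp(-eta * (picp - mu)) if picp < mu else 0.0
    return nmpiw * (1 + penalty)

# Both targets covered, so no penalty: cost is just the normalized width.
print(cwc([0.0, 0.0], [2.0, 2.0], [1.0, 1.5]))  # 4.0
```

Because PICP is non-differentiable, LUBE-style training usually optimizes this cost with a derivative-free method such as simulated annealing or particle swarm optimization.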
Currently available uncertainty quantification (UQ) neural networks (NNs) are trained through statistical error minimization. As a result, NNs perform poorly on critical input patterns: some input patterns have lower coverage probabilities than others. Such input-dependent performance is evident in electricity price prediction, where different input features come from heterogeneous monitoring sources. This paper proposes a prediction interval (PI)-based UQ of the electricity price with a proposed partial adversarial training to achieve input-domain-independent performance. The proposed training consists of initial training, adversarial sample generation from critical samples, and a final training on the combined dataset of critical samples and initial training samples. Critical situations are those where prediction systems struggle and make higher statistical errors. Multiple NNs are first trained with different initializations. Each training sample not covered by the NN pair generates an adversarial sample. The adversarial dataset is concatenated with the initial samples, and the final NN training is performed on the combined dataset. The technique is visualized with rough sketches in both the time and input domains. The feasibility and performance are examined on experimental electricity market price datasets.
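The training pipeline described above (initial ensemble training, adversarial generation from uncovered samples, retraining on the combined dataset) can be outlined schematically. The `train`, `is_covered`, and `perturb` functions below are toy stand-ins for the paper's NN training, coverage check, and adversarial generation, chosen only so the outline runs end to end:

```python
# Schematic outline of partial adversarial training.  The three helper
# functions are deliberately trivial stand-ins, not the real method.
import random

def partial_adversarial_training(data, n_models=3, noise=0.05):
    models = [train(data, seed=s) for s in range(n_models)]       # 1) initial training
    critical = [x for x in data
                if any(not is_covered(m, x) for m in models)]     # 2) uncovered samples
    adversarial = [perturb(x, noise) for x in critical]           # 3) adversarial generation
    return train(data + adversarial, seed=0)                      # 4) final training

def train(data, seed):
    """Toy 'model': the mean of the training data."""
    random.seed(seed)
    return sum(data) / len(data)

def is_covered(model, x, width=1.0):
    """Toy coverage check: is x inside a fixed-width interval?"""
    return abs(x - model) <= width

def perturb(x, noise):
    """Toy adversarial sample: a small random shift of x."""
    return x + random.uniform(-noise, noise)

model = partial_adversarial_training([0.0, 0.1, 0.2, 2.5])
print(model)  # the outlier 2.5 was uncovered, duplicated, and re-learned
```

The key point of the outline is step 2: adversarial samples are generated only from the critical (uncovered) inputs, so the final training effort concentrates on the regions where the initial ensemble under-covers.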
Kabir, H M Dipu; Abdar, Moloud; Khosravi, Abbas; ... "SpinalNet: Deep Neural Network With Gradual Input." IEEE Transactions on Artificial Intelligence, vol. 4, no. 5, Oct. 2023. Journal article; peer reviewed; open access.
Deep neural networks (DNNs) have achieved state-of-the-art (SOTA) performance in numerous fields. However, DNNs require high computation times, and better performance at lower computational cost is always desirable. We therefore study the human somatosensory system and design a neural network (SpinalNet) to achieve higher accuracy with fewer computations. Hidden layers (HLs) in traditional NNs receive inputs from the previous layer, apply an activation function, and then transfer the outcomes to the next layer. In the proposed SpinalNet, each layer is divided into three splits: 1) an input split, 2) an intermediate split, and 3) an output split. The input split of each layer receives a part of the inputs. The intermediate split of each layer receives the outputs of the intermediate split of the previous layer and the outputs of the input split of the current layer. The number of incoming weights thus becomes significantly lower than in traditional DNNs. SpinalNet can also be used as the fully connected or classification layer of a DNN and supports both traditional learning and transfer learning. We observe significant error reductions with lower computational costs in most of the DNNs. Traditional learning on the VGG-5 network with SpinalNet classification layers provided SOTA performance on the QMNIST, Kuzushiji-MNIST, and EMNIST (Letters, Digits, and Balanced) datasets. Traditional learning with ImageNet pretrained initial weights and SpinalNet classification layers provided SOTA performance on the STL-10, Fruits 360, Bird225, and Caltech-101 datasets. The scripts of the proposed SpinalNet training are available at the following link: https://github.com/dipuk0506/SpinalNet.
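The gradual-input idea can be illustrated framework-free: each layer's input split receives only a slice of the input vector, and its intermediate split also receives the previous layer's intermediate outputs. The weights below are toy values (all ones), not a trained network; see the linked repository for the actual implementation:

```python
# Framework-free sketch of SpinalNet's gradual input: each layer sees a
# slice of the input plus the previous intermediate split's outputs, so
# each unit has far fewer incoming weights than a fully connected layer.
# All weights are toy values (effectively 1.0).

def spinal_forward(x, n_layers=4, hidden=2):
    chunk = len(x) // n_layers
    prev = [0.0] * hidden                             # no intermediate input yet
    outputs = []
    for layer in range(n_layers):
        part = x[layer * chunk:(layer + 1) * chunk]   # input split: a slice of x
        incoming = part + prev                        # + previous intermediate split
        # intermediate split: toy linear units with ReLU activation
        prev = [max(0.0, sum(incoming)) for _ in range(hidden)]
        outputs.extend(prev)                          # output split: concatenation
    return outputs

x = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
print(len(spinal_forward(x)))  # 8 (hidden units from each of the 4 layers)
```

With 4 layers of 2 hidden units each, every unit here receives only `chunk + hidden = 4` inputs, versus 8 for a fully connected unit over the whole input, which is the weight reduction the abstract describes.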