Currently, most deep dehazing approaches are developed in an end-to-end manner, reconstructing a degraded image in an unintelligible fashion. For these dehazing models, the absence of ‘credible modeling’ to guide the network design is a barrier to commercial application in the open world. To address this problem, we introduce the Taylor approximation principle as the core idea and materialize it with the help of the Laplacian pyramid. Specifically, we assume that the N paths of the Laplacian pyramid model correspond to the N terms (sub-functions) in Taylor’s theorem. We use the bottom and top paths to reconstruct the low- and high-frequency information of the clear image, respectively. Further, we develop a T-Unet module that regularizes the feature maps generated on the bottom paths, and we design a shared attention weight K to help approximate the high-order Taylor terms. Extensive experimental results demonstrate that our approach can process a 4K image in real time (80 fps) on a single GPU with 24 GB of RAM while offering strong interpretability. Our code is available at https://github.com/zzr-idam/Interpretable-Pyramid-Network.
•We expand the Laplacian pyramid model with Taylor’s theorem, ensuring interpretable improvements.•We use an attention weight K to approximate the Taylor terms and reduce overhead.•The T-Unet module successfully reduces noise in the data stream.•We add 3000 image pairs to the 4KID dataset.
We consider a distribution system in which retailers replenish perishable goods from a warehouse, which, in turn, replenishes from an outside source. Demand at each retailer depends on exogenous features and a random shock, and unfulfilled demand is lost. The objective is to obtain a data-driven replenishment and allocation policy that minimizes the average inventory cost per time period. Extant data-driven methods either cannot guarantee a feasible solution for out-of-sample feature observations or generate one only with excessive computational time. We propose a policy that resolves these issues in two steps. In the first step, we assume that the distributions of features and random shocks are known. We develop an effective heuristic policy by using a Taylor expansion to approximate the retailer’s inventory cost. The resulting solution is closed-form and is referred to as the Taylor Approximation (TA) policy. We show that the TA policy is asymptotically optimal in the number of retailers. In the second step, we apply linear quantile regression and kernel density estimation to the TA solution to obtain the data-driven policy, called the Data-Driven Taylor Approximation (DDTA) policy. We prove that the DDTA policy is consistent with the TA policy. A numerical study shows that the DDTA policy is very effective. Using a real data set provided by Fresh Hema, we show that the DDTA policy reduces the average cost by 11.0% compared with Hema’s policy. Finally, we show that the main results still hold in the cases of correlated demand features, positive lead times, and censored demand. This paper was accepted by J. George Shanthikumar, data science. Funding: Y. Yang acknowledges financial support from the NSFC Grants 72125004, 71821002. W. Zhou acknowledges financial support from the NSFC Grants 72192823, 71821002. Supplemental Material: The online appendix and data files are available at https://doi.org/10.1287/mnsc.2021.04241.
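As a generic illustration of the idea behind the TA policy (not the paper's actual derivation), the sketch below applies a second-order Taylor (delta-method) approximation to the expected cost of a smooth, newsvendor-style surrogate; the softplus cost, order level q, and cost rates are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(d, q=10.0, h=1.0, p=4.0):
    # smooth surrogate of a newsvendor-style cost (softplus replacing max)
    over = np.logaddexp(0.0, q - d)   # ~ holding cost when q > d
    under = np.logaddexp(0.0, d - q)  # ~ lost-sales cost when d > q
    return h * over + p * under

mu, sigma = 9.0, 2.0
demand = rng.normal(mu, sigma, 200_000)
mc_estimate = cost(demand).mean()     # brute-force expected cost

# second-order Taylor approximation around the mean demand:
#   E[c(D)] ~ c(mu) + 0.5 * c''(mu) * Var(D), c'' by finite differences
eps = 1e-4
c2 = (cost(mu + eps) - 2 * cost(mu) + cost(mu - eps)) / eps**2
ta_estimate = cost(mu) + 0.5 * c2 * sigma**2
```

The appeal of the Taylor route is that the approximation is closed-form in the moments of demand, so it can be optimized over q without sampling.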
In this work, we consider the problem of optimal design of an acoustic cloak under uncertainty and develop scalable approximation and optimization methods to solve this problem. The design variable is taken as an infinite-dimensional, spatially varying field that represents the material property, while an additive infinite-dimensional random field represents, e.g., the variability of the material property or the manufacturing error. Discretization of this optimal design problem results in high-dimensional design variables and uncertain parameters. To solve this problem, we develop a computational approach based on a Taylor approximation and an approximate Newton method for optimization, which uses a Hessian derived at the mean of the random field. We show that our approach is scalable with respect to the dimensions of both the design variables and the uncertain parameters, in the sense that the necessary number of acoustic wave propagations is essentially independent of these dimensions, in numerical experiments with up to one million design variables and half a million uncertain parameters. We demonstrate that, using our computational approach, an optimal design of the acoustic cloak that is robust to material uncertainty is achieved in a tractable manner. The optimal design under uncertainty problem is posed and solved for the classical circular obstacle surrounded by a ring-shaped cloaking region, subjected to both a single-direction, single-frequency incident wave and multiple-direction, multiple-frequency incident waves. Finally, we apply the method to a deterministic large-scale optimal cloaking problem with complex geometry, to demonstrate that the approximate Newton method's Hessian computation is viable for large, complex problems.
•We propose a scalable method for stochastic optimal design of metamaterial cloaks.•We use Taylor approximations combined with an approximate Newton method.•The method is scalable with respect to both design and parameter dimensions.•We test multiple incident frequencies and directions, as well as complex geometry.
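The core approximation can be illustrated on a toy objective: under a quadratic Taylor expansion about the mean field, E[J(x + ξ)] ≈ J(x) + ½ tr(H(x)C) for ξ ~ N(0, C). The sketch below checks this against Monte Carlo for a deliberately simple J whose Hessian is diagonal; the dimension, objective, and noise level are illustrative, not the paper's wave-equation cost:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50                               # toy stand-in for the discretized field dimension
x = rng.uniform(0.0, np.pi, n)       # nominal (mean) parameter field
sigma = 0.1                          # perturbation std; covariance C = sigma^2 * I here

def J(v):
    # toy objective standing in for the (expensive) acoustic cost functional
    return np.sum(np.sin(v))

# quadratic Taylor approximation of the mean objective:
#   E[J(x + xi)] ~ J(x) + 0.5 * tr(H(x) C),  xi ~ N(0, C)
H_diag = -np.sin(x)                  # Hessian of sum(sin) is diagonal: -sin(x_i)
taylor_mean = J(x) + 0.5 * sigma**2 * np.sum(H_diag)

# brute-force Monte Carlo check (feasible only for this toy problem)
samples = x + sigma * rng.standard_normal((20_000, n))
mc_mean = np.sin(samples).sum(axis=1).mean()
```

In the paper's setting the Hessian is never formed explicitly; its action on a few directions suffices, which is what makes the approach dimension-independent.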
•Off-grid DOA estimation.•Grid-based DOA estimation.•Second-order Taylor approximation.•Proportionality relationship.•Block restricted isometry property.
The problem of off-grid direction-of-arrival (DOA) estimation is investigated. We develop a grid-based method to jointly estimate the closest spatial frequency (the sine of the DOA) grids and the gaps between the estimated grids and the corresponding frequencies. Using a second-order Taylor approximation, the data model is formulated under the framework of joint-sparse representation. We point out an important property of the signals of interest in this model, namely the proportionality relationship, which is empirically demonstrated to be useful in the sense that it increases the probability of the mixing matrix satisfying the block restricted isometry property. Simulation examples demonstrate the effectiveness and superiority of the proposed method against several state-of-the-art grid-based approaches.
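A minimal numeric check of the second-order steering-vector approximation such methods rely on, assuming a uniform linear array with half-wavelength spacing; the array size, grid point f0, and off-grid gap delta are illustrative:

```python
import numpy as np

M = 8                          # number of sensors, half-wavelength spacing
m = np.arange(M)
f0, delta = 0.30, 0.02         # nearest grid point and off-grid gap in f = sin(theta)

def steer(f):
    # steering vector of a uniform linear array for spatial frequency f
    return np.exp(1j * np.pi * m * f)

a_true = steer(f0 + delta)
d1 = 1j * np.pi * m * steer(f0)           # first derivative w.r.t. f
d2 = (1j * np.pi * m) ** 2 * steer(f0)    # second derivative w.r.t. f

# second-order Taylor model: a(f0 + delta) ~ a(f0) + a'(f0) delta + a''(f0) delta^2 / 2
a_taylor2 = steer(f0) + d1 * delta + 0.5 * d2 * delta**2

err = np.linalg.norm(a_true - a_taylor2) / np.linalg.norm(a_true)
```

The joint-sparse formulation then treats the derivative terms as extra dictionary blocks, with the gap delta entering as the unknown coefficient scaling.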
•A proportional differential control algorithm is studied.•The first-order Taylor expansion is used to approximate the time-delay of the system.•An algorithm is presented to determine the controller parameters.•The desired dynamic performance can be achieved.
In this paper, a proportional differential control algorithm is studied. According to the system performance indices, we use the first-order Taylor expansion to approximate the time delay and present an algorithm to determine the controller parameters so that the desired dynamic performance can be achieved. Numerical examples are provided to show the advantages of the proposed algorithm for small time delays.
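The first-order approximation of the delay can be sanity-checked in the frequency domain, where a delay tau has response e^{-j w tau} ~ 1 - j w tau for small w*tau; the delay value and frequency band below are illustrative:

```python
import numpy as np

tau = 0.05                        # small time delay (s), illustrative
w = np.linspace(0.1, 5.0, 50)     # rad/s band where w * tau stays small

exact = np.exp(-1j * w * tau)     # frequency response of the pure delay
approx = 1 - 1j * w * tau         # first-order Taylor approximation

max_err = np.max(np.abs(exact - approx))
```

The approximation error grows like (w*tau)^2 / 2, which is why the design procedure is advocated for small delays only.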
•A computational framework to solve PDE-constrained optimization under uncertainty.•Exploring the intrinsic dimensionality of the control objective in parameter space.•Taylor-approximation-based control variate for variance reduction.•Lagrangian formulation for eigenvalue-problem-constrained optimization.
In this work we develop a scalable computational framework for the solution of PDE-constrained optimal control problems under high-dimensional uncertainty. Specifically, we consider a mean-variance formulation of the control objective and employ a Taylor expansion with respect to the uncertain parameter field either to directly approximate the control objective or as a control variate for variance reduction. The expressions for the mean and variance of the Taylor approximation are known analytically, although their evaluation requires efficient computation of the trace of the (preconditioned) Hessian of the control objective. We propose to estimate this trace by solving a generalized eigenvalue problem using a randomized algorithm that only requires the action of the Hessian on a small number of random directions. Then, the computational work does not depend on the nominal dimension of the uncertain parameter, but depends only on the effective dimension (i.e., the rank of the preconditioned Hessian), thus ensuring scalability to high-dimensional problems. Moreover, to increase the estimation accuracy of the mean and variance of the control objective by the Taylor approximation, we use it as a control variate for variance reduction, which results in considerable computational savings (several orders of magnitude) compared to a plain Monte Carlo method. In summary, our approach amounts to solving an optimal control problem constrained by the original PDE, the generalized eigenvalue equations at a small number of eigenfunctions, and a set of linearized PDEs that arise from the computation of the gradient and Hessian of the control objective with respect to the uncertain parameter. We use the Lagrangian formalism to derive expressions for the gradient with respect to the control and apply a gradient-based optimization method to solve the problem. 
We demonstrate the accuracy, efficiency, and scalability of the proposed computational method for two examples with high-dimensional uncertain parameters: subsurface flow in a porous medium modeled as an elliptic PDE, and turbulent jet flow modeled by the Reynolds-averaged Navier–Stokes equations coupled with a nonlinear advection-diffusion equation characterizing model uncertainty. In particular, for the latter more challenging example we show scalability of our algorithm up to one million parameters resulting from discretization of the uncertain parameter field.
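As a scalar toy version of the control-variate idea (not the PDE-constrained setting), the sketch below uses a quadratic Taylor surrogate whose mean is known analytically to reduce Monte Carlo variance; the objective f and noise level are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 0.3                       # std of the uncertain parameter x ~ N(0, sigma^2)

def f(x):
    return np.exp(x)              # toy objective in the uncertain parameter

def g(x):
    # quadratic Taylor surrogate of f about the mean (0): 1 + x + x^2/2
    return 1 + x + 0.5 * x**2

Eg = 1 + 0.5 * sigma**2           # E[g(x)] is known in closed form

xs = rng.normal(0.0, sigma, 5_000)
plain = f(xs)                     # plain Monte Carlo samples of the objective
cv = f(xs) - g(xs) + Eg           # control-variate samples: same mean, tiny variance

var_ratio = cv.var() / plain.var()
```

Because the surrogate absorbs the low-order variation of f, only the (small) Taylor remainder is left to sample, which is the source of the orders-of-magnitude savings reported above.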
To characterize electromagnetic metamaterials at the level of an effective medium, nonlocal constitutive relations are required. In the most general sense, this is feasible using a response function that is convolved with the electric field to express the electric displacement field. Even though this is a neat concept, it bears little practical use. Therefore, the response function is frequently approximated by a polynomial function. While in the past explicit constitutive relations were derived that considered only some lowest-order terms, we develop here a general framework that considers an arbitrarily high number of terms. It constitutes, therefore, an approximation to the initially considered response function of arbitrary precision. The reason for the previously self-imposed restriction to only a few lowest-order terms in the expansion has been the unavailability of the necessary interface conditions with which these nonlocal constitutive relations have to be equipped; otherwise one could not make practical use of them. Therefore, besides introducing such higher-order nonlocal constitutive relations, it is at the heart of this contribution to derive the necessary interface conditions that pave the way for the practical use of these advanced material laws.
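Schematically, and in simplified scalar notation, the program described above can be summarized as follows; the expansion coefficients a_n(ω) are generic placeholders, not the paper's notation:

```latex
% Nonlocal constitutive relation via a response kernel, and its
% polynomial (weak spatial dispersion) approximation in Fourier space.
D(\mathbf r,\omega)
  = \varepsilon_0 \int R(\mathbf r-\mathbf r',\omega)\, E(\mathbf r',\omega)\,\mathrm d^3 r'
\;\xrightarrow{\ \text{Fourier}\ }\;
\tilde D(k,\omega) = \varepsilon_0\,\tilde R(k,\omega)\,\tilde E(k,\omega),
\qquad
\tilde R(k,\omega) \approx \sum_{n=0}^{N} a_n(\omega)\,k^{2n}.
```

Transforming the truncated polynomial back to real space replaces powers of k by spatial derivatives, yielding a higher-order differential constitutive relation that needs correspondingly higher-order interface conditions.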
This paper deals with the problem of blind source separation (BSS), where the observed signals are a mixture of delayed sources. In a previous work, when the delay time is small enough that the first-order Taylor approximation holds, delayed observations were transformed into an instantaneous mixture of the original sources and their derivatives, for which an extended second-order blind identification (SOBI) approach was used to recover the sources. Inspired by these results, we generalize the first-order Taylor approximation to higher-order approximations for the case of a large delay time, based on a similar extension of SOBI. Compared with SOBI and its extended version for a first-order Taylor approximation, our method achieves better separation quality when the delay time is large. Simulation results verify the performance of our approach under different time delays and signal-to-noise ratio conditions, respectively.
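The underlying Taylor model of a delayed source can be checked numerically; the sketch below compares the first- and second-order approximations of s(t - tau) for a single sinusoidal source (the signal, delay, and sampling grid are illustrative):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2000)
tau = 0.01                            # delay, small against the signal's variation
s = np.sin(2 * np.pi * 3 * t)         # a single source, for illustration

# Taylor models of the delayed source:
#   s(t - tau) ~ s(t) - tau s'(t)                       (first order)
#   s(t - tau) ~ s(t) - tau s'(t) + tau^2/2 s''(t)      (second order)
ds = np.gradient(s, t)
d2s = np.gradient(ds, t)
delayed = np.sin(2 * np.pi * 3 * (t - tau))
order1 = s - tau * ds
order2 = s - tau * ds + 0.5 * tau**2 * d2s

inner = slice(10, -10)                # ignore boundary effects of np.gradient
err1 = np.max(np.abs(delayed - order1)[inner])
err2 = np.max(np.abs(delayed - order2)[inner])
```

Each added order appends another derivative signal to the instantaneous mixture, which is what the extended SOBI step then has to unmix.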
•The time-delay term is approximated by the third-order Taylor expansion.•A special vibration model of 1.5 degrees of freedom is introduced to match the approximated third-order linear subsystem.•The obtained energy function of each delayed subsystem has actual physical significance and weak conservativeness.•A state-dependent switching rule with respect to both the current and delayed states is proposed.•A delay-independent unstable system can be stabilized by switching between two chosen delay values.
This paper presents a new state-dependent switching strategy for stabilization of switched time-delay systems with all subsystems being unstable. When time-delays are not small enough, the delayed subsystem can be approximated as a third-order linear delay-free system by using third-order Taylor expansion. Then, a special vibration model with a nonholonomic constraint is introduced to match the obtained third-order linear system. On this basis, the energy function of the original delayed subsystem is constructed by the sum of the kinetic and potential energies of the special vibration model. After that, a state-dependent switching rule with large energy loss in a switching loop is designed by using the energy functions of two delayed subsystems. Finally, excellent agreement is found between our analytical results and the corresponding numerical simulations.
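The benefit of the third-order truncation for delays that are not small can be seen in the frequency domain, where the delay response e^{-tau s}, s = jw, is replaced by its cubic Taylor polynomial; the delay and frequency band below are illustrative:

```python
import numpy as np

tau = 0.5                           # a delay that is deliberately "not small"
w = np.linspace(0.1, 2.0, 40)       # frequency band of interest (rad/s)
s = 1j * w

exact = np.exp(-tau * s)            # frequency response of the pure delay
order1 = 1 - tau * s                # first-order truncation
order3 = 1 - tau * s + (tau * s) ** 2 / 2 - (tau * s) ** 3 / 6  # third order

e1 = np.max(np.abs(exact - order1))
e3 = np.max(np.abs(exact - order3))
```

The cubic truncation is exactly what turns each delayed subsystem into a third-order linear delay-free system, to which the vibration-model energy construction can then be applied.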
This study derives an asset pricing model by introducing the economic activity of firms into the business cycle model, exploring the expected returns of stocks and shedding light on the equity premium risk. The model follows discrete-time optimization to arrive at an asset pricing model that includes an economic activity variable. The result shows that the main factors affecting the rate of stock returns at a given time are the rate of time preference, the firm's investment, the stock price, and the growth rate of private consumption. Therefore, the economic activity of firms influences the expected returns on stocks in a positive direction. In contrast, the growth rate of consumption has the opposite impact on the expected rate of stock returns.