The spectral slope of magnetohydrodynamic (MHD) turbulence varies with the spectral theory considered: −3/2 in Kraichnan–Iroshnikov–Dobrowolny (KID) theory; −5/3 in the Marsch–Matthaeus–Zhou and Goldreich–Sridhar theories, also called Kolmogorov-like (K41-like) MHD theory; a combination of the −5/3 and −3/2 scalings in Biskamp's theory; and so on. A rigorous mathematical proof of any of these spectral theories is of great scientific interest. Motivated by the 2012 work of A. Biryuk and W. Craig (Physica D 241 (2012) 426–438), we establish inertial-range bounds for the K41-like phenomenon in MHD turbulent flow with mathematical rigor: we identify a range of wave numbers in which the spectral slope of MHD turbulence is proportional to −5/3, and we explicitly formulate the upper and lower bounds of this range. We also show that the Leray weak solution of the standard MHD model is bounded in Fourier space, that the spectral energy of the system is bounded, and that its time average decreases in time.
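For reference, the two competing spectral scalings discussed above take the following standard forms, where $\varepsilon$ is the energy dissipation rate, $v_A$ the Alfvén speed, and the prefactors $C_K$, $C_{IK}$ are dimensionless constants (these formulas are textbook statements of the theories, not results of the present work):

```latex
% Kolmogorov-like (K41-like) spectrum, slope -5/3:
E(k) \;\sim\; C_{K}\,\varepsilon^{2/3}\,k^{-5/3},
% Kraichnan--Iroshnikov--Dobrowolny (KID) spectrum, slope -3/2:
E(k) \;\sim\; C_{IK}\,(\varepsilon\, v_A)^{1/2}\,k^{-3/2}.
```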
Adaptive gradient descent methods such as Adam, RMSprop, and AdaGrad have achieved great success in training deep learning models. These methods adapt the learning rates during training, resulting in faster convergence. Recent studies have shown that they suffer from extreme learning rates, non-convergence issues, and poor generalization. Enhanced variants such as AMSGrad and AdaBound have been proposed, but their performance is controversial and some drawbacks remain. In this work, we propose an optimizer called AdaCB, which limits the learning rates of Adam to a convergence range bound. The bound range is determined by the LR test, and two bound functions, both of which tend to a constant value, are designed to constrain Adam. To evaluate our method, we carry out experiments on an image classification task: three models, Smallnet, Network IN Network, and Resnet, are trained on the CIFAR10 and CIFAR100 datasets. Experimental results show that our method outperforms other optimizers, with (CIFAR10, CIFAR100) accuracies of (82.76%, 53.29%), (86.24%, 60.19%), and (83.24%, 55.04%) on Smallnet, Network IN Network, and Resnet, respectively. The results also indicate that our method maintains a fast learning speed, like adaptive gradient methods, in the early stage and achieves considerable accuracy, like SGD(M), at the end.
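The clipping idea behind such bounded optimizers can be sketched as follows. The bound functions below are illustrative assumptions in the style of AdaBound's schedule, not the actual AdaCB bounds (which the abstract says are calibrated by the LR test); `final_lr` and `gamma` are hypothetical parameters:

```python
def clipped_lr(adaptive_lr, step, final_lr=0.1, gamma=1e-3):
    """Clip an adaptive (Adam-style) per-step learning rate into a band.

    Both bound functions converge to final_lr as step grows, so the
    optimizer behaves adaptively early in training and like SGD with a
    constant learning rate at the end.  Illustrative AdaBound-style
    bounds; step must be >= 1.
    """
    lower = final_lr * (1.0 - 1.0 / (gamma * step + 1.0))  # rises toward final_lr
    upper = final_lr * (1.0 + 1.0 / (gamma * step))        # falls toward final_lr
    return min(max(adaptive_lr, lower), upper)
```

Early on the band is wide, so an aggressive adaptive rate passes through unchanged; late in training both bounds pinch toward `final_lr`, forcing SGD-like behavior.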
Asian options are popular path-dependent options, and it has been a long-standing problem to price them efficiently and accurately. Since there is no known exact pricing formula for Asian options, numerical pricing methods like lattice models must be employed. A lattice divides a certain time interval into n time steps, and the pricing results generated by the lattice (called desired option values for convenience) converge to the theoretical option value as n → ∞. Since a brute-force lattice pricing algorithm runs in subexponential time in n, heuristics like the interpolation method are used to strike a balance between efficiency and accuracy. But the pricing results might not converge due to the accumulation of interpolation errors. For pricing European-style Asian options, the evaluation on the major part of the lattice can be done by a simple formula, and the interpolation method is only required on the minor part of the lattice; thus polynomial-time algorithms with a convergence guarantee for European-style Asian options can be derived. However, no such simple formula exists for American-style Asian options. This paper proposes an efficient range-bound algorithm that brackets the desired option value. By taking advantage of the early exercise property of American-style options, we show that part of the lattice can be evaluated by a simple formula. The interpolation method is required only on the remaining part of the lattice, and the upper- and lower-bound option values produced by the proposed algorithm are essentially numerically identical. Thus the theoretical option value can be said to be obtained in practice when the range-bound algorithm runs on a lattice with a large number of time steps.