Non-Euclidean geometry, discovered by negating Euclid's parallel postulate, has been of considerable interest in mathematics and related fields for the description of geographical coordinates, Internet infrastructures, and the general theory of relativity. Notably, an infinite number of regular tessellations in hyperbolic geometry (hyperbolic lattices) are expected to extend Euclidean Bravais lattices and the consequent wave phenomena to non-Euclidean geometry. However, topological states of matter in hyperbolic lattices have yet to be reported. Here we investigate topological phenomena in hyperbolic geometry, exploring how the quantized curvature and edge dominance of the geometry affect topological phases. We report a recipe for the construction of a Euclidean photonic platform that inherits the topological band properties of a hyperbolic lattice under a uniform, pseudospin-dependent magnetic field, realizing a non-Euclidean analog of the quantum spin Hall effect. For hyperbolic lattices with different quantized curvatures, we examine the topological protection of helical edge states and generalize Hofstadter's butterfly by employing two empirical parameters that measure the edge confinement and defect immunity. We demonstrate that the proposed platforms exhibit the unique spectral-magnetic sensitivity of topological immunity in highly curved hyperbolic planes. Our approach is applicable to general non-Euclidean geometry and enables the exploitation of infinite lattice degrees of freedom for band theory.
Deep learning (DL) based methods have swept the field of mechanical fault diagnosis because of their powerful feature representation ability. However, many existing DL methods fail to mine the relationships between signals explicitly. Unlike those deep neural networks, graph convolutional networks (GCNs), which take graph data with topological structure as input, are more efficient at mining data relationships, making GCNs powerful for feature representation from graph data in non-Euclidean space. Nevertheless, existing GCNs have two limitations. First, most GCNs are constructed on unweighted graphs, treating all neighbors as equally important, which is not in line with reality. Second, the receptive field of GCNs is fixed, which limits their effectiveness for feature representation. To address these issues, a multireceptive field graph convolutional network (MRF-GCN) is proposed for effective intelligent fault diagnosis. In MRF-GCN, data samples are converted into weighted graphs to indicate differences in the relationships between data samples. Moreover, MRF-GCN not only learns features from different receptive fields but also fuses the learned features into an enhanced feature representation. To verify the efficacy of MRF-GCN for machine fault diagnosis, case studies are implemented, and the results show that MRF-GCN can achieve superior performance even on imbalanced datasets.
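The two ingredients described above (weighted graphs and multiple receptive fields) can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the function names and the choice of fusing by concatenation are illustrative assumptions. The k-hop receptive field is obtained here via powers of the normalized weighted adjacency matrix.

```python
import numpy as np

def normalize_adj(W):
    # Symmetrically normalize a *weighted* adjacency matrix with self-loops:
    # A = D^{-1/2} (W + I) D^{-1/2}, where D is the degree matrix.
    W = W + np.eye(W.shape[0])
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ W @ D_inv_sqrt

def multi_receptive_field_conv(W, X, thetas):
    # One multi-receptive-field layer (illustrative): each weight matrix in
    # `thetas` operates on a different hop count (A, A^2, ...), i.e. a
    # different receptive field; the outputs are fused by concatenation.
    A = normalize_adj(W)
    outs, A_k = [], np.eye(W.shape[0])
    for theta in thetas:
        A_k = A_k @ A                                  # k-th hop neighborhood
        outs.append(np.maximum(A_k @ X @ theta, 0.0))  # ReLU activation
    return np.concatenate(outs, axis=1)                # fused representation
```

A graph with n nodes, feature dimension f, and r receptive fields thus yields an (n, r·f)-shaped fused feature matrix when each theta is (f, f).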
We establish a sharp affine Lp Sobolev trace inequality by using the Lp Busemann–Petty centroid inequality. For p=2, our affine version is stronger than the famous sharp L2 Sobolev trace inequality proved independently by Escobar and Beckner. Our approach also allows us to characterize all extremizers in this case. For this new inequality, no Euclidean geometric structure is needed.
Abstract
We consider two-dimensional CFT states that are produced by a gravitational path integral. As a first case, we consider a state produced by Euclidean AdS$_2$ evolution followed by flat space evolution. We use the fine-grained entropy formula to explore the nature of the state. We find that the naive hyperbolic space geometry leads to a paradox. This is resolved if we include a geometry that connects the bra with the ket, a bra-ket wormhole. The semiclassical Lorentzian interpretation leads to a CFT state entangled with an expanding and collapsing Friedmann cosmology.
As a second case, we consider a state produced by Lorentzian dS$_2$ evolution, again followed by flat space evolution. The most naive geometry also leads to a similar paradox. We explore several possible bra-ket wormholes. The most obvious one leads to a badly divergent temperature. The most promising one also leads to a divergent temperature, but by making a projection onto low-energy states we find that it has features similar to the previous Euclidean case. In particular, the maximum entropy of an interval in the future is set by the de Sitter entropy.
Clustering non-Euclidean data is difficult, and one of the most widely used algorithms besides hierarchical clustering is Partitioning Around Medoids (PAM), also simply referred to as k-medoids clustering. In Euclidean geometry the mean, as used in k-means, is a good estimator for the cluster center, but this does not exist for arbitrary dissimilarities. PAM uses the medoid instead: the object with the smallest dissimilarity to all others in the cluster. This notion of centrality can be used with any (dis-)similarity, and thus is of high relevance to many domains and applications. A key issue with PAM is its high run time cost. We propose modifications to the PAM algorithm that achieve an O(k)-fold speedup in the second (“SWAP”) phase of the algorithm, but still find the same results as the original PAM algorithm. If we relax the choice of swaps performed (while retaining comparable quality), we can further accelerate the algorithm by eagerly performing additional swaps in each iteration. With the substantially faster SWAP, we can now explore faster initialization strategies, because (i) the classic (“BUILD”) initialization now becomes the bottleneck, and (ii) our swap is fast enough to compensate for worse starting conditions. We also show how the CLARA and CLARANS algorithms benefit from the proposed modifications. While we do not study the parallelization of our approach in this work, it can easily be combined with earlier approaches to use PAM and CLARA on big data (some of which use PAM as a subroutine and hence can immediately benefit from these improvements), where the performance with high k becomes increasingly important. In experiments on real data with k=100 and k=200, we observed speedups of 458× and 1191×, respectively, compared to the original PAM SWAP algorithm, making PAM applicable to larger data sets, and in particular to higher k.
•Faster k-medoids (PAM) clustering algorithm.
•Scalable to a large number of clusters (large k).
•Sampling-based approximations for large data sets (large n).
•Same quality as previous state-of-the-art techniques (PAM).
•Included in popular clustering tools such as ELKI and R.
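The two notions the abstract relies on, the medoid of a cluster and PAM's objective (total deviation from the nearest medoid), can be sketched in a few lines of numpy. This is an illustrative sketch of the definitions only, not the accelerated SWAP algorithm; `medoid` and `total_deviation` are hypothetical helper names, and `D` is assumed to be a precomputed pairwise dissimilarity matrix.

```python
import numpy as np

def medoid(D, members):
    # The medoid is the member with the smallest total dissimilarity to all
    # other members of the cluster; it is defined for *any* dissimilarity,
    # unlike the mean used by k-means.
    sub = D[np.ix_(members, members)]
    return members[int(np.argmin(sub.sum(axis=1)))]

def total_deviation(D, medoids):
    # PAM's objective: the sum, over all points, of the dissimilarity to the
    # nearest medoid. SWAP greedily exchanges a medoid with a non-medoid
    # whenever doing so lowers this value.
    return D[:, medoids].min(axis=1).sum()
```

For example, with six points 0, 1, 2, 10, 11, 12 on a line and absolute-difference dissimilarities, the medoid of the first three points is the point 1, and the total deviation for medoids {1, 11} is 4.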
We propose an end-to-end place recognition model based on a novel deep neural network. First, we exploit the spatial pyramid structure of the images to enhance the vector of locally aggregated descriptors (VLAD) such that the enhanced VLAD features reflect the structural information of the images. To encode this feature extraction into the deep learning method, we build a spatial pyramid-enhanced VLAD (SPE-VLAD) layer. Next, we impose weight constraints on the terms of the traditional triplet loss (T-loss) function such that the weighted T-loss (WT-loss) function avoids suboptimal convergence of the learning process. The loss function works well under weakly supervised scenarios, in that it determines the semantically positive and negative samples of each query through not only the GPS tags but also the Euclidean distance between the image representations. The SPE-VLAD layer and the WT-loss layer are integrated with the VGG-16 or ResNet-18 network to form a novel end-to-end deep neural network that can be easily trained via standard backpropagation. We conduct experiments on three benchmark data sets, and the results demonstrate that the proposed model outperforms state-of-the-art deep learning approaches applied to place recognition.
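The weighted triplet loss idea can be sketched minimally as the standard triplet hinge loss with per-term weights. The specific weighting scheme below (scalar weights `w_p`, `w_n` on the positive and negative distance terms, and the margin value) is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

def weighted_triplet_loss(anchor, pos, neg, w_p=1.0, w_n=1.0, margin=0.1):
    # Triplet hinge loss with illustrative per-term weights: pull the anchor
    # toward the positive sample and push it from the negative sample,
    # weighting the two distance terms differently (w_p, w_n assumed here).
    d_pos = np.linalg.norm(anchor - pos)  # Euclidean distance, anchor-positive
    d_neg = np.linalg.norm(anchor - neg)  # Euclidean distance, anchor-negative
    return max(0.0, w_p * d_pos - w_n * d_neg + margin)
```

With unit weights this reduces to the traditional T-loss: the loss is zero once the negative is farther from the anchor than the positive by at least the margin.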
We present a representation formula for translating soliton surfaces to the mean curvature flow in Euclidean space ${\mathbb {R}}^{4}$ and give examples of conformal parameterisations for translating soliton surfaces.
Abstract
We propose an optimization procedure for Euclidean path integrals that evaluate CFT wave functionals in arbitrary dimensions. The optimization is performed by minimizing a certain functional, which can be interpreted as a measure of computational complexity, with respect to the background metrics for the path integrals. In two-dimensional CFTs, this functional is given by the Liouville action. We also formulate the optimization for higher dimensional CFTs and, in various examples, find that the optimized hyperbolic metrics coincide with the time slices of the expected gravity duals. Moreover, if we optimize a reduced density matrix, the geometry becomes two copies of the entanglement wedge and reproduces the holographic entanglement entropy. Our approach resembles a continuous tensor network renormalization and provides a concrete realization of the proposed interpretation of AdS/CFT as tensor networks. The present paper is an extended version of our earlier report arXiv:1703.00456 and includes many new results, such as evaluations of complexity functionals, the energy stress tensor, higher dimensional extensions, and time evolutions of thermofield double states.
In recent work [J. Feldbrugge et al., Phys. Rev. D 95, 103508 (2017); J. Feldbrugge et al., Phys. Rev. Lett. 119, 171301 (2017)], we introduced Picard-Lefschetz theory as a tool for defining the Lorentzian path integral for quantum gravity in a systematic semiclassical expansion. This formulation avoids several pitfalls occurring in the Euclidean approach. Our method provides, in particular, a more precise formulation of the Hartle-Hawking no boundary proposal, as a sum over real Lorentzian four-geometries interpolating between an initial three-geometry of zero size, i.e., a point, and a final three-geometry. With this definition, we calculated the no boundary amplitude for a closed universe with a cosmological constant, assuming cosmological symmetry for the background and including linear perturbations. We found the opposite semiclassical exponent to that obtained by Hartle and Hawking for the creation of a de Sitter spacetime “from nothing.” Furthermore, we found the linearized perturbations to be governed by an inverse Gaussian distribution, meaning they are unsuppressed and out of control. Recently, Diaz Dorronsoro et al. [Phys. Rev. D 96, 043505 (2017)] followed our methods but attempted to rescue the no boundary proposal by integrating the lapse over a different, intrinsically complex contour. Here, we show that, in addition to the desired Hartle-Hawking saddle point contribution, their contour yields extra, nonperturbative corrections which again render the perturbations unsuppressed. We prove there is no choice of complex contour for the lapse which avoids this problem. We extend our discussion to include backreaction in the leading semiclassical approximation, fully nonlinearly for the lowest tensor harmonic and to second order for all higher modes. Implications for quantum de Sitter spacetime and for cosmic inflation are briefly discussed.