Light fields are image-based representations that use densely sampled rays as a scene description. In this paper, we explore geometric structures of 3D lines in ray space for improving light field triangulation and stereo matching. The triangulation problem aims to fill in the ray space with continuous and non-overlapping simplices anchored at sampled points (rays). Such a triangulation provides a piecewise-linear interpolant useful for light field super-resolution. We show that the light field space is largely bilinear due to 3D line segments in the scene, and that direct triangulation of these bilinear subspaces leads to large errors. We instead present a simple but effective algorithm that first maps bilinear subspaces to line constraints and then applies Constrained Delaunay Triangulation (CDT). Based on our analysis, we further develop a novel line-assisted graph-cut (LAGC) algorithm that effectively encodes 3D line constraints into light field stereo matching. Experiments on synthetic and real data show that both our triangulation and LAGC algorithms outperform state-of-the-art solutions in accuracy and visual quality.
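A minimal 2D sketch of the constrained triangulation step, assuming the Python `triangle` package (a wrapper around Shewchuk's Triangle) and toy coordinates: sampled rays become vertices in an EPI slice, a scene line segment is mapped to a constraint edge, and the CDT is forced to honor that edge. This only illustrates the 2D case, not the paper's full 4D ray-space pipeline.

    import numpy as np
    import triangle  # pip install triangle; wrapper around Shewchuk's Triangle

    # Toy example (hypothetical coordinates): rays sampled in one EPI slice (u, s),
    # plus one constraint edge derived from a 3D scene line segment.
    rays = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0],
                     [0.2, 0.5], [0.8, 0.5]])
    constraints = np.array([[4, 5]])     # edge between the two interior samples
    cdt = triangle.triangulate({"vertices": rays, "segments": constraints}, "p")
    print(cdt["triangles"])              # simplices that never cross the constraint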
Transfer entropy (TE) is an information-theoretic measure which has received recent attention in neuroscience for its potential to identify effective connectivity between neurons. Calculating TE for large ensembles of spiking neurons is computationally intensive, and has caused most investigators to probe neural interactions at only a single time delay and at a message length of only a single time bin. This is problematic, as synaptic delays between cortical neurons, for example, range from one to tens of milliseconds. In addition, neurons produce bursts of spikes spanning multiple time bins. To address these issues, here we introduce a free software package that allows TE to be measured at multiple delays and message lengths. To assess performance, we applied these extensions of TE to a spiking cortical network model (Izhikevich, 2006) with known connectivity and a range of synaptic delays. For comparison, we also investigated single-delay TE, at a message length of one bin (D1TE), and cross-correlation (CC) methods. We found that D1TE could identify 36% of true connections when evaluated at a false positive rate of 1%. For extended versions of TE, this dramatically improved to 73% of true connections. In addition, the connections correctly identified by extended versions of TE accounted for 85% of the total synaptic weight in the network. Cross correlation methods generally performed more poorly than extended TE, but were useful when data length was short. A computational performance analysis demonstrated that the algorithm for extended TE, when used on currently available desktop computers, could extract effective connectivity from 1 hr recordings containing 200 neurons in ∼5 min. We conclude that extending TE to multiple delays and message lengths improves its ability to assess effective connectivity between spiking neurons. These extensions to TE soon could become practical tools for experimentalists who record hundreds of spiking neurons.
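The core computation can be sketched with a plug-in (histogram) estimator for binary spike trains; the delay and message-length conventions below are illustrative assumptions, and the released package's exact definitions may differ.

    import numpy as np
    from collections import Counter

    def transfer_entropy(x, y, delay=1, msg_len=1):
        """TE (bits) from spike train y to spike train x (0/1 arrays).
        Predicts x[t] from x's previous msg_len bins and y's msg_len bins
        ending `delay` bins earlier. Plug-in estimator, illustration only."""
        x, y = np.asarray(x, int), np.asarray(y, int)
        joint, both_past, next_xpast, xpast = Counter(), Counter(), Counter(), Counter()
        start = max(msg_len, delay + msg_len - 1)
        n_samples = 0
        for t in range(start, len(x)):
            xp = tuple(x[t - msg_len:t])
            yp = tuple(y[t - delay - msg_len + 1:t - delay + 1])
            xn = x[t]
            joint[(xn, xp, yp)] += 1
            both_past[(xp, yp)] += 1
            next_xpast[(xn, xp)] += 1
            xpast[xp] += 1
            n_samples += 1
        te = 0.0
        for (xn, xp, yp), c in joint.items():
            p = c / n_samples
            te += p * np.log2((c / both_past[(xp, yp)]) /
                              (next_xpast[(xn, xp)] / xpast[xp]))
        return te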
To achieve more complete and more uniformly highlighted salient object regions, this study presents a computational saliency enhancement model that incorporates the properties of multi-scale and logarithmic response into the local and global contrasts. A distinct feature of the authors' model is a novel saliency enhancement operator. This operator can effectively enhance the saliency of object interior regions while simultaneously reducing the blur on object boundaries caused by the use of multiple scales. Their model is a general one that can make flexible trade-offs between precision and recall. Detailed comparisons with 12 state-of-the-art methods show that their method obtains satisfactory salient object regions that are closer to the human-labelled results. In addition, their method provides superior results in precision–recall, F-measure and mean absolute error.
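Since the abstract does not spell out the operator, the following is only a hypothetical sketch of the general idea: fuse multi-scale contrast maps and apply a logarithmic response so that weakly salient interior pixels are boosted more than already-strong ones.

    import numpy as np

    def enhance_saliency(scale_maps, k=10.0):
        """Hypothetical enhancement operator (not the authors' exact formulation):
        average saliency maps computed at several scales, then apply a
        normalized logarithmic response to lift object interiors."""
        fused = np.mean(np.stack(scale_maps), axis=0)    # combine scales
        return np.log1p(k * fused) / np.log1p(k)         # log response mapped to [0, 1]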
As high performance clusters continue to grow in size and popularity, issues of fault
tolerance and reliability are becoming limiting factors on application scalability.
To address these issues, we present the design and implementation of a system for
providing coordinated checkpointing and rollback recovery for MPI-based parallel
applications. Our approach integrates the Berkeley Lab BLCR kernel-level process
checkpoint system with the LAM implementation of MPI through a defined
checkpoint/restart interface. Checkpointing is transparent to the application,
allowing the system to be used for cluster maintenance and scheduling purposes as
well as for fault tolerance. Experimental results show negligible communication
performance impact due to the incorporation of the checkpoint support capabilities
into LAM/MPI.
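For intuition only, a coordinated checkpoint can be sketched at the application level with mpi4py; this illustrates the barrier-style coordination idea, not the transparent kernel-level LAM/MPI + BLCR mechanism the paper describes.

    from mpi4py import MPI
    import pickle

    def coordinated_checkpoint(state, step, prefix="ckpt"):
        """Illustrative application-level coordinated checkpoint: all ranks
        reach a barrier so no messages are in flight, each rank writes its
        state, then all ranks synchronize again before resuming."""
        comm = MPI.COMM_WORLD
        comm.Barrier()                               # quiesce communication
        fname = f"{prefix}_step{step}_rank{comm.Get_rank()}.pkl"
        with open(fname, "wb") as f:
            pickle.dump(state, f)
        comm.Barrier()                               # all checkpoints complete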
Evaluating the response of a linear shift-invariant system is a task that arises frequently in a wide variety of science and engineering problems. Calculating the system response via a convolution may be done efficiently with Fourier transforms. When one must compute the response of one system to m input signals, or the response of m systems to one signal, it may be possible to approximate all system responses without computing all m Fourier transforms. This can lead to substantial computational savings. Rather than process each signal individually, one may process only basis vectors that span the output data space. To obtain a low-error approximation, however, the output vectors, assembled into a matrix, must have low numerical rank. We develop theory showing how the singular value decay of a matrix ΦA, the product of a convolution operator Φ and an arbitrary matrix A, depends in a linear fashion on the singular value decays of Φ and A. We propose gap-rank, a measure of the relative numerical rank of a matrix. We show that, under only modest assumptions, convolution cannot destroy the numerical low-rankness of the data. We then develop a new method that exploits this low-rank structure with block Golub–Kahan iteration in a Krylov subspace to approximate the low-rank problem. Our method can exploit parallelism both in the individual convolutions and in the linear algebra operations of the block Golub–Kahan algorithm. We present numerical examples from signal and image processing that show the low error and scalability of our method.
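The savings can be sketched as follows, assuming a truncated SVD in place of the paper's block Golub–Kahan iteration: with A ≈ U_k S_k V_k^T, only the k scaled basis signals need to be convolved, and the m outputs are recovered by recombination.

    import numpy as np

    def convolve_lowrank(phi, A, k):
        """Approximate the convolution of filter `phi` with each of the m columns
        of A using only k FFT-based convolutions (a full SVD is shown for clarity;
        the paper uses block Golub-Kahan iteration to avoid it)."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        Uk = U[:, :k] * s[:k]                        # k scaled basis signals
        n = A.shape[0] + len(phi) - 1                # full linear-convolution length
        P = np.fft.rfft(phi, n)[:, None]
        conv_basis = np.fft.irfft(P * np.fft.rfft(Uk, n, axis=0), n, axis=0)
        return conv_basis @ Vt[:k, :]                # Phi A ~= (Phi U_k S_k) V_k^T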
Learning-based depth estimation from light fields has made significant progress in recent years. However, most existing approaches work within the supervised framework, which requires vast quantities of ground-truth depth data for training. Furthermore, accurate depth maps for light fields are hardly available except in a few synthetic datasets. In this paper, we exploit the multi-orientation epipolar geometry of light fields and propose an unsupervised monocular depth estimation network. It predicts depth from the central view of the light field without any ground-truth information. Inspired by the inherent depth cues and geometric constraints of light fields, we introduce three novel unsupervised loss functions: a photometric loss, a defocus loss and a symmetry loss. We have evaluated our method on a public 4D light field synthetic dataset. As the first unsupervised method published on the 4D Light Field Benchmark website, our method achieves satisfactory performance on most error metrics. Comparison experiments with two state-of-the-art unsupervised methods demonstrate the superiority of our method. We also demonstrate the effectiveness and generality of our method on real-world light-field images.
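A sketch of the photometric term under assumed conventions (the loss names come from the abstract, but the exact formulation is not given there): warp a neighboring view toward the central view using the predicted disparity scaled by the angular offset, then penalize the reconstruction error.

    import torch
    import torch.nn.functional as F

    def photometric_loss(center, neighbor, disparity, du, dv):
        """Illustrative photometric loss: warp `neighbor` (angular offset du, dv)
        toward the center view with the predicted disparity, then take an L1
        reconstruction error. center, neighbor: (B, C, H, W); disparity: (B, 1, H, W)."""
        B, _, H, W = center.shape
        ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                                torch.arange(W, dtype=torch.float32), indexing="ij")
        xs = xs.to(center.device) + du * disparity[:, 0]   # shift grows with disparity
        ys = ys.to(center.device) + dv * disparity[:, 0]
        grid = torch.stack((2 * xs / (W - 1) - 1, 2 * ys / (H - 1) - 1), dim=-1)
        warped = F.grid_sample(neighbor, grid, align_corners=True)
        return (warped - center).abs().mean()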
As with many imaging tasks, disparity estimation for light fields seems well matched to machine learning approaches. Neural network-based methods can achieve an overall bad pixel rate as low as four percent on the 4D light field benchmark dataset, but continued effort to improve accuracy is yielding diminishing returns. On the other hand, owing to the growing importance of mobile and embedded devices, improving efficiency is emerging as an important problem. In this paper, we improve the efficiency of existing neural network approaches for light field disparity estimation by introducing efficient network blocks, pruning redundant sections of the network and downsampling the resolution of feature vectors. To improve performance, we also propose densely sampled epipolar plane image volumes as input. Experimental results show that our approach achieves results comparable to state-of-the-art methods while using only one-tenth of the runtime.
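As one example of what an "efficient network block" might look like (an assumption; the paper's specific block design is not given in the abstract), a depthwise separable convolution replaces a dense k x k convolution with a depthwise k x k plus a 1 x 1 pointwise convolution, cutting parameters and multiply-adds.

    import torch.nn as nn

    class EfficientBlock(nn.Module):
        """Depthwise separable convolution block, shown only as a representative
        efficiency-oriented building block (hypothetical, not the paper's design)."""
        def __init__(self, in_ch, out_ch, kernel_size=3):
            super().__init__()
            self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                       padding=kernel_size // 2, groups=in_ch)
            self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.act(self.pointwise(self.depthwise(x)))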