A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold, or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software tool that allows fully automatic discovery of patterns; the software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.
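As an illustration of the core idea, the minimal sketch below approximates a d-dimensional density by a product of one-dimensional kernel density estimates, i.e., the simplest possible 1d-decomposition under a full-independence assumption. The function names and the choice of Gaussian KDE are illustrative assumptions, not the framework's actual decompositions.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical illustration: approximate a d-dimensional density as a product
# of 1D kernel density estimates, one per feature.  This is only one possible
# 1d-decomposition (full independence); the framework described in the
# abstract allows more flexible decompositions.

def fit_1d_decomposition(X):
    """Fit one univariate KDE per column of the (n x d) data matrix X."""
    return [gaussian_kde(X[:, j]) for j in range(X.shape[1])]

def log_density(kdes, x):
    """Evaluate the factorized log-density at a single point x (length d)."""
    return sum(np.log(kde(xj)[0]) for kde, xj in zip(kdes, x))

# Usage sketch with synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))     # 500 samples, 10 features
kdes = fit_1d_decomposition(X)
print(log_density(kdes, X[0]))     # log-density of the first sample
```

Because every estimation runs on a single feature at a time, the cost grows only linearly with the number of dimensions, which is the property the abstract highlights.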
Recent pose-invariant methods try to model the subject-specific appearance change across pose. For this, however, almost all existing methods require a perfect alignment between a gallery and a probe image. In this paper we present a pose-invariant face recognition method that does not require facial landmarks to be detected and is able to work with only a single training image per subject. We propose novel extensions by introducing a more robust feature description as opposed to pixel-based appearances. Using such features, we propose to synthesize non-frontal views to frontal ones. Furthermore, local kernel density estimation, instead of the commonly used normal density assumption, is suggested to derive the prior models. Our method does not require any strict alignment between gallery and probe images, which makes it particularly attractive compared to existing state-of-the-art methods. Improved recognition across a wide range of poses has been achieved using these extensions.
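To illustrate why a kernel density prior can be preferable to a normal density assumption, the hedged sketch below fits both to the same one-dimensional toy feature; the bimodal toy data and all names are hypothetical and only stand in for the subject-specific appearance features used in the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

# Hypothetical comparison of the two prior models mentioned in the abstract:
# a single normal density versus a kernel density estimate fitted to the same
# 1D feature samples.  Real features and bandwidths would be task specific.

rng = np.random.default_rng(1)
# Bimodal toy feature distribution that a single Gaussian cannot capture
samples = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 0.5, 300)])

gauss_prior = norm(loc=samples.mean(), scale=samples.std())
kde_prior = gaussian_kde(samples)

x = 0.0                                    # query value between the two modes
print("normal prior:", gauss_prior.pdf(x))
print("KDE prior   :", kde_prior(x)[0])    # much lower, reflecting the gap
```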
There has been an increasing number of studies on the potential effects of land-use change on the carbon (C) balance. However, few of these studies have focused on arid regions. Cropland in Xinjiang, a typical arid region in China, has expanded dramatically over the last 40 years. This study applied the Carbon Bookkeeping Model to estimate the changes in C stocks resulting from cropland expansion in Xinjiang from 1975 to 2015. The results showed that the area of cropland increased by a factor of ∼1.6. This increase was driven by advancements in agricultural technology and favorable agricultural policies. The increase in cropland area of 2.03 Mha (M = 10⁶) was the result of the clearing of ∼4.09 Mha of land for cropland and the conversion of 2.06 Mha of cropland to other land cover types. The expansion in cropland resulted in substantial sequestration of C, with that in Xinjiang amounting to 94.24 Tg C (1 Tg = 10¹² g), accounting for 1.4% of the regional C stocks. Land clearing for cropland (LCC) had the greatest contribution to C sequestration in Xinjiang. The rate of increase in C density through LCC was 0.61 Mg C ha⁻¹ a⁻¹ from 1975 to 2004 and 1.54 Mg C ha⁻¹ a⁻¹ from 2005 to 2015. C sequestration due to cropland loss (CLO) of 29.40 Tg C was attributed to the expansion of built-up land and afforestation. Sustainable agricultural activities represented by large-scale clearing for cropland were a major C sink in Xinjiang. Therefore, sustainable management of cropland is essential for maintaining a high C density and preventing loss of C to the atmosphere through cropland abandonment in the future.
Although it is generally assumed that herbivores have more voluminous body cavities due to the larger digestive tracts required for the digestion of plant fiber, this concept has not been addressed quantitatively. We estimated the volume of the torso in 126 terrestrial tetrapods (synapsids including basal synapsids and mammals, and diapsids including birds, non-avian dinosaurs and reptiles) classified as either herbivore or carnivore in digital models of mounted skeletons, using the convex hull method. The difference in relative torso volume between diet types was significant in mammals, where relative torso volumes of herbivores were about twice as large as those of carnivores, supporting the general hypothesis. However, this effect was not evident in diapsids. This may either reflect the difficulty of reliably reconstructing mounted skeletons of non-avian dinosaurs, or a fundamental difference in the bauplan of different groups of tetrapods, for example due to differences in respiratory anatomy. Evidently, the condition in mammals should not be automatically assumed in other, including more basal, tetrapod lineages. In both synapsids and diapsids, large animals showed a high degree of divergence with respect to the proportion of their convex hull directly supported by bone, with animals like elephants or Triceratops having a low proportion, and animals such as rhinoceros having a high proportion of bony support. The relevance of this difference remains to be further investigated.
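A minimal sketch of the convex hull method is given below, assuming a torso is already available as a 3D point cloud; loading and segmenting the actual mounted-skeleton models is not covered, and the synthetic point cloud only stands in for such data.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Minimal sketch of the convex hull method: estimate a torso volume from a
# 3D point cloud sampled on a digital skeleton model.

def convex_hull_volume(points):
    """Return the volume of the convex hull of an (n, 3) point array."""
    return ConvexHull(points).volume

# Usage with random points inside a unit sphere as a stand-in point cloud
rng = np.random.default_rng(2)
pts = rng.normal(size=(2000, 3))
pts = pts / np.linalg.norm(pts, axis=1, keepdims=True) * rng.random((2000, 1)) ** (1 / 3)
print(convex_hull_volume(pts))   # approaches 4/3*pi ≈ 4.19 with many points
```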
Computer-based analysis of digitized histological images has been gaining increasing attention due to their extensive use in research and routine practice. The article aims to contribute towards the description and retrieval of histological images by employing a structural method using graphs. Due to their expressive ability, graphs are considered a powerful and versatile representation formalism and have gained growing consideration, especially in the image processing and computer vision community.
The article describes a novel method for determining similarity between histological images through graph-theoretic description and matching, for the purpose of content-based retrieval. A higher order (region-based) graph-based representation of breast biopsy images has been attained and a tree-search based inexact graph matching technique has been employed that facilitates the automatic retrieval of images structurally similar to a given image from large databases.
The results obtained and the evaluation performed demonstrate the effectiveness and superiority of graph-based image retrieval over a common histogram-based technique. The complexity of the employed graph matching has been reduced compared to state-of-the-art optimal inexact matching methods by applying a prerequisite criterion for the matching of nodes and a sophisticated design of the estimation function, especially the prognosis function.
The proposed method is suitable for the retrieval of similar histological images, as suggested by the experimental and evaluation results obtained in the study. It is intended for use in applications requiring Content-Based Image Retrieval (CBIR) in the areas of medical diagnostics and research, and can also be generalized for the retrieval of other types of complex images.
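As an illustrative stand-in for the described retrieval step, the sketch below ranks database graphs by an inexact matching cost against a query graph using networkx's A*-based graph edit distance; the node attribute "tissue" and the prerequisite that only nodes of the same tissue type may be matched are assumptions, and the paper's own tree-search matcher with its estimation and prognosis functions is not reproduced.

```python
import networkx as nx

# Rank database graphs by an inexact matching cost against a query graph.
# networkx's graph_edit_distance is used here instead of the paper's own
# tree-search matcher; "tissue" is a hypothetical node attribute.

def node_prerequisite(n1, n2):
    """Only allow matching nodes that represent the same tissue type."""
    return n1.get("tissue") == n2.get("tissue")

def retrieve(query, database, k=3):
    """Return the k database graphs with the lowest edit distance to the query."""
    scored = [(nx.graph_edit_distance(query, g, node_match=node_prerequisite), g)
              for g in database]
    scored.sort(key=lambda pair: pair[0])
    return [g for _, g in scored[:k]]
```

Exact graph edit distance is expensive on larger graphs, which is why pruning criteria such as the node-matching prerequisite and a good prognosis function matter in practice.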
The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1224798882787923.
With the increasing importance of monitoring urban areas, the question arises which sensors are best suited to solve the corresponding challenges. This letter proposes novel node tests within the random forest (RF) framework, which allow the framework to be applied to optical RGB images, hyperspectral images, and light detection and ranging (LiDAR) data, either individually or in combination. This not only allows accurate classification results to be derived for many relevant urban classes without preprocessing or feature extraction, but also provides insights into which sensor offers the most meaningful data to solve the given classification task. The achieved results on a public benchmark data set are superior to results obtained by deep learning approaches, despite being based on only a fraction of the training samples.
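For orientation, the hedged baseline below simply stacks per-pixel RGB, hyperspectral, and LiDAR channels and trains a standard scikit-learn Random Forest; the letter's contribution, node tests that address each sensor individually, is not re-implemented, and the array shapes and band counts are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Baseline sketch: per-pixel fusion of RGB, hyperspectral and LiDAR channels
# followed by a standard Random Forest.  The arrays below are placeholders
# for co-registered per-pixel sensor data and labels.

n_pixels = 10000
rgb    = np.random.rand(n_pixels, 3)      # RGB values
hs     = np.random.rand(n_pixels, 144)    # hyperspectral bands (count assumed)
lidar  = np.random.rand(n_pixels, 1)      # e.g. normalized height
labels = np.random.randint(0, 6, n_pixels)  # urban land-cover classes

X = np.hstack([rgb, hs, lidar])
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X, labels)
print(clf.score(X, labels))
```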
The reconstruction of the 3D geometry of a scene based on image sequences has been a very active field of research for decades. Nevertheless, challenges remain, in particular for homogeneous parts of objects. This paper proposes a solution to enhance the 3D reconstruction of weakly-textured surfaces using standard cameras as well as a standard multi-view stereo pipeline. The underlying idea of the proposed method is to improve the signal-to-noise ratio in weakly-textured regions while adaptively amplifying the local contrast to make better use of the limited numerical range of 8-bit images. Based on this premise, multiple shots per viewpoint are used to suppress statistically uncorrelated noise and enhance low-contrast texture. By only changing the image acquisition and adding a preprocessing step, a tremendous increase of up to 300% in the completeness of the 3D reconstruction is achieved.
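A possible preprocessing step along these lines is sketched below: several shots of the same viewpoint are averaged to suppress uncorrelated noise, and local contrast is then amplified with CLAHE as one plausible adaptive contrast operator; the paper's exact amplification scheme may differ.

```python
import cv2
import numpy as np

# Sketch of the described preprocessing: multiple shots of one viewpoint are
# averaged to suppress statistically uncorrelated noise, then local contrast
# is amplified before the image enters a standard multi-view stereo pipeline.

def preprocess_viewpoint(image_paths):
    shots = [cv2.imread(p, cv2.IMREAD_GRAYSCALE).astype(np.float32) for p in image_paths]
    mean_img = np.mean(shots, axis=0)                       # noise suppression
    mean_u8 = np.clip(mean_img, 0, 255).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    return clahe.apply(mean_u8)                             # local contrast boost

# Usage with hypothetical file names for two shots of the same viewpoint
# enhanced = preprocess_viewpoint(["view01_shot1.png", "view01_shot2.png"])
```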
The typical processing chain for pixel-wise classification from PolSAR images starts with an optional preprocessing step (e.g., speckle reduction), continues with extracting features that project the complex-valued data into the real domain (e.g., by polarimetric decompositions), which are then used as input for a machine-learning based classifier, and ends with an optional postprocessing step (e.g., label smoothing). The extracted features are usually hand-crafted as well as preselected and represent a somewhat arbitrary projection from the complex to the real domain in order to fit the requirements of standard machine-learning approaches such as Support Vector Machines or Artificial Neural Networks. This paper proposes to adapt the internal node tests of Random Forests to work directly on the complex-valued PolSAR data, which makes any explicit feature extraction obsolete. This approach leads to a classification framework with a significantly decreased computation time and memory footprint, since no image features have to be computed and stored beforehand. The experimental results on one fully-polarimetric and one dual-polarimetric dataset show that, despite the simpler approach, accuracy can be maintained (decreased by less than 2% for the fully-polarimetric dataset) or even improved (increased by roughly 9% for the dual-polarimetric dataset).
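The sketch below shows what such a node test could look like in principle: a pixel's complex polarimetric covariance matrix is compared with a reference matrix drawn at the node, and the sample is routed by thresholding a matrix distance. The Frobenius distance used here is an assumption made for illustration; the paper's concrete tests may be defined differently.

```python
import numpy as np

# Hypothetical node test operating directly on complex-valued PolSAR data:
# route a sample by thresholding the Frobenius distance between its 3x3
# Hermitian covariance matrix and a reference matrix drawn at the tree node.

def covariance_distance(C, C_ref):
    """Frobenius distance between two complex covariance matrices."""
    return np.linalg.norm(C - C_ref, ord="fro")

def node_test(C, C_ref, threshold):
    """Send the sample left (True) or right (False) at a tree node."""
    return covariance_distance(C, C_ref) < threshold

# Usage with random Hermitian matrices as stand-ins for PolSAR covariances
A = np.random.randn(3, 3) + 1j * np.random.randn(3, 3)
B = np.random.randn(3, 3) + 1j * np.random.randn(3, 3)
C, C_ref = A @ A.conj().T, B @ B.conj().T
print(node_test(C, C_ref, threshold=5.0))
```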
•A new pipeline combining a pre-processing step and a CNN is proposed for retinal vessel segmentation.
•A deep strided-CNN is proposed to segment retinal vessels better and faster.
•Skip connections are used to generate sharper and well-segmented vessels.
•The selected loss function fulfils the problem requirement and the nature of the dataset.
In this paper, a deep convolutional neural network (CNN) is proposed for accurate segmentation of retinal blood vessels, which plays a significant role in the observation of many eye diseases. A strided-CNN model is proposed for accurate segmentation of retinal vessels, especially the tiny vessels. The model is a fully convolutional model consisting of an encoder part and a decoder part, where the pooling layers are replaced with strided convolutional layers. The strided convolutional layer approach was chosen over pooling layers because the former can be trained. Morphological mappings along with Principal Component Analysis (PCA)-based pre-processing steps are used to generate contrast images for the training dataset. Skip connections are implemented to concatenate features from the encoder part and the decoder part to enhance vessel segmentation, especially of the tiny vessels, and to make the vessels' edges sharper. We used a class balancing loss function to train and optimize the proposed model to improve vessel image quality. The impact of the proposed segmentation method is evaluated on four databases, namely DRIVE, STARE, CHASE-DB1, and HRF. Overall model performance, particularly with respect to tiny vessels, is primarily influenced by sensitivity and accuracy metrics. We demonstrate that our model outperforms other models with a sensitivity of 0.87, 0.808, 0.886, and 0.829 on DRIVE, STARE, CHASE_DB1, and HRF, respectively, along with respective accuracies of 0.956, 0.954, 0.976, and 0.962.
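A minimal PyTorch sketch of such an encoder-decoder is given below, with strided convolutions in place of pooling and a skip connection concatenating encoder features into the decoder; layer counts and channel widths are assumptions and do not reproduce the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch (not the paper's exact architecture): an encoder-decoder for
# vessel segmentation where downsampling uses strided convolutions instead of
# pooling and encoder features reach the decoder via a skip connection.

class StridedSegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU())
        self.down1 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, 1, 1)              # per-pixel vessel logit

    def forward(self, x):
        e1 = self.enc1(x)                            # full-resolution features
        e2 = self.enc2(self.down1(e1))               # strided conv replaces pooling
        d1 = self.up1(e2)                            # learned upsampling
        d1 = self.dec1(torch.cat([d1, e1], dim=1))   # skip connection from encoder
        return self.head(d1)

# Usage: one grayscale (e.g. PCA-preprocessed) retinal patch of size 64x64
model = StridedSegNet()
logits = model(torch.randn(1, 1, 64, 64))
print(logits.shape)   # torch.Size([1, 1, 64, 64])
```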
Iterative Bilateral Filtering of Polarimetric SAR Data
D'Hondt, Olivier; Guillaso, Stephane; Hellwich, Olaf
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 06/2013, Volume 6, Issue 3
Journal Article · Peer reviewed · Open access
In this paper, we introduce an iterative speckle filtering method for polarimetric SAR (PolSAR) images based on the bilateral filter. To locally adapt to the spatial structure of images, this filter relies on pixel similarities in both the spatial and the radiometric domain. To deal with polarimetric data, we study the use of similarities based on a statistical distance, the Kullback-Leibler divergence, as well as two geodesic distances on Riemannian manifolds. To cope with speckle, we propose to progressively refine the result through an iterative scheme. Experiments are run on synthetic and experimental data. First, simulations are generated to study the effects of the filtering parameters in terms of polarimetric reconstruction error, edge preservation, and smoothing of homogeneous areas. Our approach compares well with other state-of-the-art methods in the extraction of polarimetric information and shows superior performance for edge restoration and noise smoothing. The filter is then applied to experimental data sets from the ESAR and FSAR sensors (DLR) at L-band and S-band, respectively. These last experiments show the ability of the filter to restore structures such as buildings and roads and to preserve boundaries between regions while achieving a high amount of smoothing in homogeneous areas.
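A compact numpy sketch of one such filtering iteration is shown below, combining a spatial Gaussian weight with a radiometric weight derived from a symmetrized Kullback-Leibler-style distance between covariance matrices; the parameters, neighborhood size, and exact distance used by the authors may differ, and repeating the step corresponds to the progressive refinement described above.

```python
import numpy as np

# Sketch of one bilateral filtering iteration on a field of p x p polarimetric
# covariance matrices (shape H x W x p x p).  The spatial weight is a Gaussian
# on pixel distance; the radiometric weight uses a symmetrized divergence
# between covariance matrices, one of several choices discussed above.

def sym_divergence(C1, C2, p=3):
    """Symmetrized KL-style divergence between two p x p covariance matrices."""
    return 0.5 * np.real(np.trace(np.linalg.solve(C1, C2) + np.linalg.solve(C2, C1))) - p

def bilateral_step(C, sigma_s=2.0, sigma_r=1.0, radius=3):
    """One bilateral filtering iteration over an (H, W, p, p) covariance field."""
    H, W, p, _ = C.shape
    out = np.empty_like(C)
    for i in range(H):
        for j in range(W):
            acc, wsum = np.zeros((p, p), dtype=C.dtype), 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ni, nj = i + di, j + dj
                    if not (0 <= ni < H and 0 <= nj < W):
                        continue
                    w_s = np.exp(-(di**2 + dj**2) / (2 * sigma_s**2))       # spatial
                    w_r = np.exp(-sym_divergence(C[i, j], C[ni, nj]) ** 2
                                 / (2 * sigma_r**2))                        # radiometric
                    w = w_s * w_r
                    acc += w * C[ni, nj]
                    wsum += w
            out[i, j] = acc / wsum
    return out
```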