Exposure correction is one of the fundamental tasks in image processing and computational photography. While various methods have been proposed, they either fail to produce visually pleasing results or only work well for limited types of images (e.g., underexposed images). In this paper, we present a novel automatic exposure correction method that robustly produces high-quality results for images captured under various exposure conditions (e.g., underexposed, overexposed, and partially under- and over-exposed). At the core of our approach is the proposed dual illumination estimation, in which we separately cast under- and over-exposure correction as trivial illumination estimation of the input image and of the inverted input image. By performing dual illumination estimation, we obtain two intermediate exposure correction results for the input image: one fixes the underexposed regions, and the other restores the overexposed regions. A multi-exposure image fusion technique is then employed to adaptively blend the best-exposed parts of the two intermediate correction results and the input image into a globally well-exposed image. Experiments on a number of challenging images demonstrate the effectiveness of the proposed approach and its superiority over state-of-the-art methods and popular automatic exposure correction tools.
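The dual illumination idea can be illustrated with a minimal sketch, assuming a Retinex-style max-RGB illumination estimate and a plain averaging fusion. The paper refines the illumination map with edge-preserving smoothing and uses a proper multi-exposure fusion step; both are omitted here, so this is an illustration of the structure, not the authors' implementation.

```python
import numpy as np

def estimate_illumination(img):
    """Coarse per-pixel illumination: max over RGB channels.
    (The paper refines this estimate with edge-preserving
    smoothing, which this sketch omits.)"""
    return np.clip(img.max(axis=-1, keepdims=True), 1e-3, 1.0)

def correct_exposure(img):
    """Retinex-style correction: reflectance = image / illumination.
    Brightens dark regions; leaves well-exposed pixels mostly intact."""
    return np.clip(img / estimate_illumination(img), 0.0, 1.0)

def dual_illumination_correction(img):
    """Two intermediate results: one fixes underexposure (applied to
    the input), one fixes overexposure (applied to the inverted input,
    then inverted back), as in the dual estimation described above."""
    under_fixed = correct_exposure(img)
    over_fixed = 1.0 - correct_exposure(1.0 - img)
    # Placeholder fusion: simple average over {input, under, over}.
    # The paper instead uses multi-exposure image fusion to pick the
    # best-exposed parts adaptively.
    return (img + under_fixed + over_fixed) / 3.0
```

Since the illumination estimate never exceeds 1, the underexposure branch can only brighten a pixel, and symmetrically the inverted branch can only darken one.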
Context
Papillary lesions of the breast, characterized by the presence of arborescent fibrovascular cores that support epithelial proliferation, constitute a heterogeneous group of neoplasms with overlapping clinical manifestations and histomorphologic features but potentially divergent biological behavior. These lesions are exclusively intraductal neoplasms, although an invasive carcinoma may rarely have a predominantly papillary architecture. Although recognition of a papillary architecture is typically not challenging, the histologic distinction of these entities is not always straightforward. Historically, different terminologies and variable criteria have been proposed for a given entity by various authorities. The difficulty in classifying these lesions has been further confounded by the scarcity of data and the heterogeneity across different studies with regard to the molecular genetic characteristics of this group of lesions.
Objective
To provide an overview of current concepts in the diagnosis and classification of papillary lesions of the breast, incorporating recent molecular genetic advances.
Data sources
Data were obtained from pertinent peer-reviewed English-language literature.
Conclusions
The recent evolution of molecular techniques has enhanced our knowledge of the pathogenesis of papillary carcinomas of the breast. This, along with emerging outcome studies, has led to a prognosis-based reclassification of some of these entities. Additional studies focusing on molecular signatures are needed to identify potential decision tools to further stratify these lesions with respect to prognostic significance.
The challenge of person re-identification (re-id) is to match images of the same person captured by different non-overlapping camera views despite significant and unknown cross-view feature distortion. While a large number of distance metric/subspace learning models have been developed for re-id, the cross-view transformations they learn are view-generic and thus potentially less effective in quantifying the feature distortion inherent to each camera view. Learning view-specific feature transformations for re-id (i.e., view-specific re-id), an under-studied approach, offers an alternative for this problem. In this work, we formulate a novel view-specific person re-identification framework from the feature augmentation point of view, called Camera coRrelation Aware Feature augmenTation (CRAFT). Specifically, CRAFT performs cross-view adaptation by automatically measuring camera correlation from the cross-view visual data distribution and adaptively conducting feature augmentation to transform the original features into a new adaptive space. Through our augmentation framework, view-generic learning algorithms can be readily generalized to learn and optimize view-specific sub-models whilst simultaneously modelling view-generic discrimination information. Therefore, our framework not only inherits the strength of view-generic model learning but also provides an effective way to take view-specific characteristics into account. The CRAFT framework can be extended to jointly learn view-specific feature transformations for person re-id across a large network with more than two cameras, a largely under-investigated but realistic re-id setting. Additionally, we present a domain-generic deep person appearance representation designed to be view-invariant, so as to facilitate cross-view adaptation by CRAFT.
We conducted extensive comparative experiments to validate the superiority and advantages of our proposed framework over state-of-the-art competitors on challenging contemporary person re-id datasets.
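The feature augmentation idea can be sketched in a simplified form. The block below implements the classic "zero-padding" view-specific augmentation that CRAFT generalizes: one shared feature block plus one block per camera view. The scalar `corr` is a hypothetical stand-in for the learned camera-correlation measure; in the paper, correlation-adaptive mappings replace these fixed blocks, so this is only a structural sketch.

```python
import numpy as np

def augment_feature(x, view, n_views=2, corr=0.0):
    """Map a d-dim feature from camera `view` into an (n_views + 1) * d
    dimensional space: one shared (view-generic) block plus one block
    per view. With corr=0 this is the classic zero-padding augmentation;
    a positive corr softens the non-matching view blocks, loosely
    mimicking how camera correlation couples the views in CRAFT."""
    blocks = [x]  # shared block, carries view-generic information
    for v in range(n_views):
        # own-view block keeps the feature; other views are attenuated
        blocks.append(x if v == view else corr * x)
    return np.concatenate(blocks)
```

A view-generic learner applied to such augmented features implicitly fits one shared sub-model and one sub-model per view, which is the sense in which view-generic algorithms "generalize to view-specific sub-models" above.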
Neural networks have dominated research on hyperspectral image (HSI) classification, owing to the feature learning capacity of convolution operations. However, the fixed geometric structure of convolution kernels hinders long-range interaction between features at distant locations. In this article, we propose a novel spectral-spatial transformer network (SSTN), which consists of spatial attention and spectral association modules, to overcome the constraints of convolution kernels. We also design a factorized architecture search (FAS) framework that involves two independent subprocedures to determine the layer-level operation choices and block-level orders of the SSTN. Unlike conventional neural architecture search (NAS), which requires a bilevel optimization of both network parameters and architecture settings, FAS focuses only on finding optimal architecture settings, enabling a stable and fast architecture search. Extensive experiments conducted on five popular HSI benchmarks demonstrate the versatility of SSTNs over other state-of-the-art (SOTA) methods and justify the FAS strategy. On the University of Houston dataset, the SSTN obtains overall accuracy comparable to SOTA methods with a small fraction (1.2%) of the multiply-and-accumulate operations (MACs) of a strong baseline, the spectral-spatial residual network (SSRN). Most importantly, SSTNs outperform other SOTA networks using only 1.2% or fewer of the MACs of SSRNs on the Indian Pines, Kennedy Space Center, University of Pavia, and Pavia Center datasets.
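The long-range interaction that motivates the SSTN's attention modules can be illustrated with a generic scaled dot-product attention primitive. This is the standard transformer building block, not the paper's exact spatial attention or spectral association module; it shows how every position attends to every other position, which a fixed-size convolution kernel cannot do.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Generic attention over n positions (e.g., pixels or spectral
    bands), each with a d-dim feature. Every output row is a convex
    combination of all rows of v, so information flows between
    arbitrarily distant positions in a single layer."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # (n, n) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ v
```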
Objective
To assess the accuracy of dynamic computer‐assisted implant surgery.
Materials and methods
An electronic search up to March 2020 was conducted using PubMed, Embase, and the Cochrane Central Register of Controlled Trials to identify studies using dynamic navigation in implant surgery, and an additional manual search was performed as well. Clinical trials and model studies were selected. The primary outcome was accuracy. A single-arm meta-analysis of continuous data was conducted. Meta-regression was utilized to compare study design, guidance method, jaw, and systems.
Results
Ten studies (four randomized controlled trials [RCTs] and six prospective studies) met the inclusion criteria. A total of 1,298 drillings and implants were evaluated. The meta-analysis of accuracy (five clinical trials and five model studies) revealed that the average global platform deviation, global apex deviation, and angular deviation were 1.02 mm (95% CI: 0.83, 1.21), 1.33 mm (95% CI: 0.98, 1.67), and 3.59° (95% CI: 2.09, 5.09), respectively. Meta-regression showed no significant difference between model studies and clinical trials (p = .295, .336, .185), drilling holes and implants (p = .360, .279, .695), maxilla and mandible (p = .875, .632, .281), or the five different systems (p = .762, .342, .336).
Conclusion
The accuracy of dynamic computer-assisted implant surgery falls within a clinically acceptable range and shows potential for clinical use, but more patient-centered outcomes and socio-economic benefits should be reported.
White matter hyperintensities (WMH) are commonly found in the brains of healthy elderly individuals and have been associated with various neurological and geriatric disorders. In this paper, we present a study using deep fully convolutional networks and ensemble models to automatically detect WMH using fluid-attenuated inversion recovery (FLAIR) and T1 magnetic resonance (MR) scans. The algorithm was evaluated and ranked 1st in the WMH Segmentation Challenge at MICCAI 2017. In the evaluation stage, the implementation of the algorithm was submitted to the challenge organizers, who then independently tested it on a hidden set of 110 cases from 5 scanners. The average Dice score, precision, and robust Hausdorff distance obtained on the held-out test datasets were 80%, 84%, and 6.30 mm, respectively. These were the highest achieved in the challenge, suggesting the proposed method is the state of the art. Detailed descriptions and quantitative analysis of key components of the system are provided. Furthermore, a cross-scanner evaluation is presented to discuss how the combination of modalities affects the generalization capability of the system. The adaptability of the system to different scanners and protocols is also investigated. A quantitative study is further presented to show the effect of ensemble size and the effectiveness of the ensemble model. Additionally, the software and models of our method are made publicly available. The effectiveness and generalization capability of the proposed system show its potential for real-world clinical practice.
• Describe the design, methodology, and implementation details of our winning method for the WMH Segmentation Challenge at MICCAI 2017.
• Present an evaluation on both the public training set and the held-out test sets, and compare to other participating methods.
• Present a cross-scanner evaluation of the generalization capability of the system.
• Present a quantitative and statistical study on ensemble models to test the effect of ensemble size and each element.
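For reference, the Dice score reported above measures the overlap between a predicted and a ground-truth segmentation mask; a minimal implementation of the metric (not the challenge's evaluation code) is:

```python
import numpy as np

def dice_score(pred, truth, eps=1e-8):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|).
    1.0 means perfect agreement, 0.0 means no overlap. The 80% figure
    in the abstract is this score averaged over held-out cases."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```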
Chemical synthesis of insulin superfamily proteins (ISPs) has recently been widely studied to develop next-generation drugs. Separate synthesis of multiple peptide fragments and tedious chain-to-chain folding are usually encountered in these studies, limiting access to ISP derivatives. Here we report that insulin superfamily proteins (e.g., H2 relaxin, insulin itself, and H3 relaxin) incorporating a pre-made diaminodiacid bridge at the A-B chain terminal disulfide can be easily and rapidly synthesized by single-shot automated solid-phase synthesis and expedient one-step folding. Our new H2 relaxin analogues exhibit almost identical structures and activities when compared to their natural counterparts. This new synthetic strategy will expedite the production of new ISP analogues for pharmaceutical studies.
Insulin family proteins (e.g., H2 relaxin, insulin itself, and H3 relaxin) incorporating a pre-made diaminodiacid (DADA) bridge at the A-B chain terminal disulfide can be easily and rapidly synthesized by single-shot automated solid-phase synthesis. These new insulin analogues exhibit almost identical structures and activities when compared to their natural counterparts.
Dielectric superstrates have been commonly utilized in tightly coupled dipole arrays (TCDAs) to help achieve better impedance matching over a wide bandwidth and large scan volume. However, the dielectric slabs inevitably increase the total weight and fabrication complexity of TCDAs. Thus, a new TCDA element without additional dielectric superstrates is proposed in this letter. Specifically, the tightly coupled elements in the proposed array are open folded dipoles, whose gaps provide additional capacitance compared to normal dipoles, resulting in better impedance matching. The proposed antenna array achieves a 7.33:1 bandwidth (0.3-2.2 GHz) while scanning up to ±70° in the E-/D-plane and ±50° in the H-plane, subject to VSWR < 3.0. A prototype of an 8 × 8 array was fabricated and measured. Good agreement is achieved between measured and simulated results, validating the good performance of the proposed array.
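The VSWR < 3.0 matching criterion can be related to a measured reflection coefficient with a short helper. This is the standard transmission-line relation, not anything specific to this letter; it shows that the criterion corresponds to a return loss of roughly 6 dB or better.

```python
def vswr(s11_db):
    """Voltage standing-wave ratio from return loss in dB.
    |Γ| = 10^(S11/20); VSWR = (1 + |Γ|) / (1 - |Γ|).
    VSWR = 3.0 corresponds to |S11| ≈ -6 dB (|Γ| = 0.5)."""
    gamma = 10 ** (s11_db / 20.0)   # magnitude of reflection coefficient
    return (1 + gamma) / (1 - gamma)
```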
Solving the problem of matching people across non-overlapping multi-camera views, known as person re-identification (re-id), has received increasing interest in computer vision. In a real-world application scenario, a watch-list (gallery set) of a handful of known target people is provided with very few images (in many cases only a single shot) per target. Existing re-id methods are largely unsuitable for this open-world re-id challenge because they are designed for (1) a closed-world scenario where the gallery and probe sets are assumed to contain exactly the same people, (2) person-wise identification, whereby the model attempts to verify exhaustively against each individual in the gallery set, and (3) learning a matching model using multiple shots. In this paper, a novel transfer local relative distance comparison (t-LRDC) model is formulated to address the open-world person re-identification problem by one-shot group-based verification. The model is designed to mine and transfer useful information from a labelled open-world non-target dataset. Extensive experiments demonstrate that the proposed approach outperforms both non-transfer learning and existing transfer-learning-based re-id methods.