While image alignment has been studied in different areas of computer vision for decades, aligning images depicting different scenes remains a challenging problem. Analogous to optical flow, where an image is aligned to its temporally adjacent frame, we propose SIFT flow, a method to align an image to its nearest neighbors in a large image corpus containing a variety of scenes. The SIFT flow algorithm consists of matching densely sampled, pixelwise SIFT features between two images while preserving spatial discontinuities. The SIFT features allow robust matching across different scene/object appearances, whereas the discontinuity-preserving spatial model allows matching of objects located at different parts of the scene. Experiments show that the proposed approach robustly aligns complex scene pairs containing significant spatial differences. Based on SIFT flow, we propose an alignment-based large database framework for image analysis and synthesis, where image information is transferred from the nearest neighbors to a query image according to the dense scene correspondence. This framework is demonstrated through concrete applications such as motion field prediction from a single image, motion synthesis via object transfer, satellite image registration, and face recognition.
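For intuition, the discontinuity-preserving matching objective described above can be sketched as a simple energy evaluator. This is a simplified, hypothetical form with illustrative default weights, not the paper's exact formulation:

```python
import numpy as np

def sift_flow_energy(s1, s2, flow, eta=0.005, alpha=2.0, d=40.0):
    """Evaluate a simplified SIFT-flow-style energy for a candidate flow field.

    s1, s2 : (H, W, D) dense per-pixel SIFT descriptor maps
    flow   : (H, W, 2) per-pixel displacement (u, v), integer-valued
    Combines an L1 data term, a small-displacement prior, and a
    truncated-L1 smoothness term that preserves flow discontinuities.
    """
    H, W, _ = s1.shape
    ys, xs = np.mgrid[0:H, 0:W]
    u = flow[..., 0].astype(int)
    v = flow[..., 1].astype(int)
    # Data term: L1 descriptor distance between s1(p) and s2(p + w(p)).
    x2 = np.clip(xs + u, 0, W - 1)
    y2 = np.clip(ys + v, 0, H - 1)
    data = np.abs(s1 - s2[y2, x2]).sum()
    # Prior: discourage large displacements.
    prior = eta * (np.abs(u) + np.abs(v)).sum()
    # Smoothness: truncated L1 on neighboring flow differences (truncation
    # at d is what allows sharp motion boundaries).
    def trunc(a, b):
        return np.minimum(alpha * np.abs(a - b), d).sum()
    smooth = (trunc(u[:, :-1], u[:, 1:]) + trunc(v[:, :-1], v[:, 1:])
              + trunc(u[:-1, :], u[1:, :]) + trunc(v[:-1, :], v[1:, :]))
    return data + prior + smooth
```

Minimizing such an energy over integer flows is what the actual algorithm does with a coarse-to-fine discrete optimization; the evaluator above only scores a given candidate flow.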
On Bayesian Adaptive Video Super Resolution
Liu, Ce; Sun, Deqing
IEEE Transactions on Pattern Analysis and Machine Intelligence, 02/2014, Volume 36, Issue 2
Journal Article
Peer reviewed
Open access
Although multiframe super resolution has been extensively studied in past decades, super resolving real-world video sequences still remains challenging. In existing systems, either the motion models are oversimplified or important factors such as blur kernel and noise level are assumed to be known. Such models cannot capture the intrinsic characteristics that may differ from one sequence to another. In this paper, we propose a Bayesian approach to adaptive video super resolution via simultaneously estimating underlying motion, blur kernel, and noise level while reconstructing the original high-resolution frames. As a result, our system not only produces very promising super resolution results outperforming the state of the art, but also adapts to a variety of noise levels and blur kernels. To further analyze the effect of noise and blur kernel, we perform a two-step analysis using the Cramér-Rao bounds. We study how blur kernel and noise influence motion estimation with aliasing signals, how noise affects super resolution with perfect motion, and finally how blur kernel and noise influence super resolution with unknown motion. Our analysis results confirm empirical observations, in particular that an intermediate size blur kernel achieves the optimal image reconstruction results.
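The generative model the abstract alludes to (blur, downsample, add noise) can be sketched as a forward degradation operator. Motion is taken as identity here for brevity, and the kernel/scale arguments are illustrative assumptions, not the paper's estimated quantities:

```python
import numpy as np

def degrade(x_hr, kernel, scale, noise_sigma=0.0, seed=0):
    """Forward model assumed in Bayesian video super resolution
    (illustrative): a low-res frame is the high-res frame blurred by an
    unknown kernel, decimated by `scale`, plus Gaussian noise."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x_hr, ((ph, ph), (pw, pw)), mode="edge")
    H, W = x_hr.shape
    # Blur (implemented as correlation; symmetric kernels assumed here).
    blurred = np.zeros((H, W), dtype=float)
    for i in range(kh):
        for j in range(kw):
            blurred += kernel[i, j] * xp[i:i + H, j:j + W]
    y = blurred[::scale, ::scale]  # decimation
    if noise_sigma > 0:
        y = y + np.random.default_rng(seed).normal(0.0, noise_sigma, y.shape)
    return y
```

The Bayesian approach inverts this model: it alternates between updating the high-resolution frames, the motion, the kernel, and the noise level, each conditioned on the current estimates of the others.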
We describe a technique that automatically generates plausible depth maps from videos using non-parametric depth sampling. We demonstrate our technique in cases where past methods fail (non-translating cameras and dynamic scenes). Our technique is applicable to single images as well as videos. For videos, we use local motion cues to improve the inferred depth maps, while optical flow is used to ensure temporal depth consistency. For training and evaluation, we use a Kinect-based system to collect a large data set containing stereoscopic videos with known depths. We show that our depth estimation technique outperforms the state-of-the-art on benchmark databases. Our technique can be used to automatically convert a monoscopic video into stereo for 3D visualization, and we demonstrate this through a variety of visually pleasing results for indoor and outdoor scenes, including results from the feature film Charade.
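As a rough illustration of non-parametric depth sampling, a per-pixel median over candidate depth maps warped from the retrieved neighbors is a common robust baseline; the paper's actual inference additionally enforces spatial smoothness and flow-guided temporal consistency:

```python
import numpy as np

def fuse_candidate_depths(warped_depths):
    """Per-pixel median over depth maps warped from retrieved nearest
    neighbors. A crude but robust stand-in for the paper's optimization:
    the median rejects outlier depths from poorly matched neighbors."""
    return np.median(np.stack(warped_depths, axis=0), axis=0)
```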
Image denoising algorithms often assume an additive white Gaussian noise (AWGN) process that is independent of the actual RGB values. Such approaches cannot effectively remove color noise produced by today's CCD digital cameras. In this paper, we propose a unified framework for two tasks: automatic estimation and removal of color noise from a single image using piecewise smooth image models. We introduce the noise level function (NLF), which is a continuous function describing the noise level as a function of image brightness. We then estimate an upper bound of the real NLF by fitting a lower envelope to the standard deviations of per-segment image variances. For denoising, the chrominance of color noise is significantly removed by projecting pixel values onto a line fit to the RGB values in each segment. Then, a Gaussian conditional random field (GCRF) is constructed to obtain the underlying clean image from the noisy input. Extensive experiments are conducted to test the proposed algorithm, which is shown to outperform state-of-the-art denoising algorithms.
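The lower-envelope idea behind the NLF estimate can be sketched as a binned minimum of per-segment standard deviations versus mean segment brightness. The paper fits a smooth parametric envelope; the binning here is an illustrative simplification:

```python
import numpy as np

def lower_envelope_nlf(brightness, stddev, n_bins=16):
    """Estimate an upper bound on the noise level function (NLF): for each
    brightness bin, the smallest observed per-segment standard deviation
    bounds the noise level at that brightness, since segment variance is
    noise variance plus (non-negative) texture variance."""
    brightness = np.asarray(brightness, dtype=float)
    stddev = np.asarray(stddev, dtype=float)
    edges = np.linspace(brightness.min(), brightness.max(), n_bins + 1)
    centers, envelope = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (brightness >= lo) & (brightness <= hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            envelope.append(stddev[mask].min())
    return np.array(centers), np.array(envelope)
```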
Nonparametric Scene Parsing via Label Transfer
Ce Liu; Yuen, J.; Torralba, A.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 12/2011, Volume 33, Issue 12
Journal Article
Peer reviewed
While there has been a lot of recent work on object recognition and image understanding, the focus has been on carefully establishing mathematical models for images, scenes, and objects. In this paper, we propose a novel, nonparametric approach for object recognition and scene parsing using a new technology we name label transfer. For an input image, our system first retrieves its nearest neighbors from a large database containing fully annotated images. Then, the system establishes dense correspondences between the input image and each of the nearest neighbors using the dense SIFT flow algorithm [28], which aligns two images based on local image structures. Finally, based on the dense scene correspondences obtained from SIFT flow, our system warps the existing annotations and integrates multiple cues in a Markov random field framework to segment and recognize the query image. Promising experimental results have been achieved by our nonparametric scene parsing system on challenging databases. Compared to existing object recognition approaches that require training classifiers or appearance models for each object category, our system is easy to implement, has few parameters, and embeds contextual information naturally in the retrieval/alignment procedure.
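The annotation-warping step can be caricatured as a per-pixel majority vote over label maps transferred from the neighbors; the actual system instead integrates these votes with other cues in a Markov random field:

```python
import numpy as np

def transfer_labels(warped_label_maps, n_labels):
    """Per-pixel majority vote over annotation maps warped from the
    nearest neighbors (e.g., via SIFT flow). Ties resolve to the lowest
    label index; a simplified stand-in for the paper's MRF inference."""
    stack = np.stack(warped_label_maps, axis=0)   # (K, H, W) integer labels
    counts = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)])
    return counts.argmax(axis=0)                  # (H, W) fused label map
```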
•We assessed the global burden trend of T2D attributed to PM2.5 over the past 30 years.
•The global T2D burden trend is increasing but decreasing in regions with high SDI.
•PM2.5 remains a significant risk factor for the T2D burden globally.
Long-term exposure to fine particulate matter (PM2.5) is associated with an increased risk of type 2 diabetes (T2D). However, limited data exist on trends in the global burden of T2D attributed to PM2.5, particularly across regions with different socio-economic levels. We evaluated the spatio-temporal changes in the disease burden of T2D attributed to PM2.5 from 1990 to 2019 in 204 countries and regions with different socio-demographic indexes (SDI).
This is a retrospective analysis with data from the Global Burden of Disease Study 2019 (GBD2019) database. The burden of T2D attributed to PM2.5, age-standardized mortality rate (ASMR) and age-standardized disability-adjusted life year rate (ASDR) were estimated according to sex, age, nationality and SDI. The annual percentage change (APCs) and the average annual percentage change (AAPCs) were calculated by using the Joinpoint model to evaluate the changing trend of ASMR and ASDR attributed to PM2.5 from 1990 to 2019. The Gaussian process regression model was used to estimate the relationship of SDI with ASMR and ASDR.
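The APC and AAPC quantities used in the methods above have simple closed forms once a log-linear fit is available. A single-segment sketch follows; the Joinpoint model additionally searches for the breakpoints between segments:

```python
import numpy as np

def annual_percent_change(years, rates):
    """APC within one segment: fit ln(rate) = a + b*year by least squares,
    then APC = 100 * (exp(b) - 1)."""
    b = np.polyfit(np.asarray(years, dtype=float), np.log(rates), 1)[0]
    return 100.0 * (np.exp(b) - 1.0)

def average_annual_percent_change(segment_slopes, segment_lengths):
    """AAPC: length-weighted average of the per-segment log-scale slopes,
    transformed back to a percent change."""
    w = np.asarray(segment_lengths, dtype=float)
    b = np.asarray(segment_slopes, dtype=float)
    return 100.0 * (np.exp((w * b).sum() / w.sum()) - 1.0)
```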
Overall, the global burden of T2D attributable to PM2.5 increased significantly since 1990, particularly in the elderly, men, Africa, Asia and low-middle SDI regions. The ASMR and ASDR of T2D attributable to PM2.5 in 2019 were 2.47 (95% CI: 1.71, 3.24) per 100,000 population and 108.98 (95% CI: 74.06, 147.23) per 100,000 population, respectively. From 1990 to 2019, the global ASMR and ASDR of T2D attributed to PM2.5 increased by 57.32% and 86.75%, respectively. The global AAPCs of ASMR and ASDR were 1.57 (95% CI: 1.46, 1.68) and 2.17 (95% CI: 2.02, 2.32), respectively. Declining trends were observed in North America, South America, Europe, Australia, and other regions with high SDI.
Over this 30-year study period, the global T2D burden attributable to PM2.5 increased, particularly in regions with low-middle SDI. PM2.5 remains a major contributor to the global burden of diabetes.
We present a unified computational approach for taking photos through reflecting or occluding elements such as windows and fences. Rather than capturing a single image, we instruct the user to take a short image sequence while slightly moving the camera. Differences that often exist in the relative position of the background and the obstructing elements from the camera allow us to separate them based on their motions, and to recover the desired background scene as if the visual obstructions were not there. We show results on controlled experiments and many real and practical scenarios, including shooting through reflections, fences, and raindrop-covered windows.
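For the reflection case, a rough intuition: once every frame is warped into the background's alignment, an additive reflection that moves between frames can be suppressed with a per-pixel minimum over the stack. This is often used only as an initialization; the full method solves a joint motion/layer optimization:

```python
import numpy as np

def background_min_composite(aligned_frames):
    """Per-pixel minimum over frames already warped into a common
    background alignment. Because a reflection adds light, each pixel's
    minimum across frames approximates the reflection-free background,
    provided the reflection moves off that pixel in at least one frame."""
    return np.stack(aligned_frames, axis=0).min(axis=0)
```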
The flamelet-generated manifolds (FGM) method was adopted in this study to account for preferential diffusion in a high-hydrogen micro-mixing model burner. Specifically, when solving the FGM flamelets, accurate diffusion rates were obtained by two methods: a multicomponent formulation and a constant detailed-Lewis-number assumption. A new method, linear triangular dissection interpolation, was then proposed for filling the thermochemical state and the source term in mixture-fraction/progress-variable space, in order to predict the position of the hydrogen-rich micro-mixing flame front. Compared with the Fluent approach to establishing the diffusion FGM flamelet, the results showed that the two FGMs give similar flame predictions for high-hydrogen fuels, and both accurately capture the locations of the internal and external shear-layer boundaries of the steady micro-mixing multi-jet flame, whereas the Fluent approach based on a uniform Lewis number assumption deviates significantly from the experimental results. For the internal shear layer, however, both methods predict larger OH gradients than the experiments, owing to the lack of an effective Lewis number correction in the control-variable transport equations. The results obtained with linear triangular dissection interpolation may be superior to those of the method that linearly interpolates the progress variable toward zero at the quenching boundary, which leads to flashback because the progress-variable source term is overestimated in the region below the diffusion FGM quenching boundary.
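Linear interpolation over a triangular dissection reduces to barycentric interpolation inside each triangle. A minimal sketch, where the two coordinates stand in for mixture fraction and progress variable:

```python
import numpy as np

def triangular_interpolate(tri, values, p):
    """Linearly interpolate a tabulated quantity inside one triangle of a
    dissected (mixture fraction, progress variable) table.

    tri    : three (x, y) vertices of the triangle
    values : quantity tabulated at the three vertices
    p      : query point (x, y) inside the triangle
    """
    (x1, y1), (x2, y2), (x3, y3) = tri
    # Barycentric coordinates of p with respect to the triangle.
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    l2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    l3 = 1.0 - l1 - l2
    return l1 * values[0] + l2 * values[1] + l3 * values[2]
```

Unlike interpolation that pushes the progress variable linearly toward zero at the quenching boundary, per-triangle interpolation never evaluates the source term outside the tabulated simplex, which is the property the abstract credits with avoiding spurious flashback.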
•Flamelet-generated manifolds method was adopted for simulation.
•High-hydrogen preferential diffusion was partially considered.
•A new interpolation method was used during flamelet generation.
•A large difference appears with versus without detailed Lewis numbers.
•Assuming detailed Lewis numbers can simplify the diffusion calculation.
Fufang Xiling Jiedu capsule (FXJC), a traditional Chinese medicine that evolved from "Yinqiao Powder", is widely used for the treatment of cold and influenza. However, due to a lack of in vivo metabolism research, the chemical components responsible for the therapeutic effects still remain unclear. Hence, this study aimed to describe the metabolic profiles of the FXJC in rat plasma, urine, and feces. A combined data mining strategy based on ultra-performance liquid chromatography coupled with quadrupole time-of-flight tandem mass spectrometry was employed and 201 xenobiotics, including 117 prototype components and 84 metabolites, were detected. Phenolic acids, flavonoids, triterpenes, and lignans were prominent ingredients absorbed in vivo, and the major metabolic pathways of the detected metabolites were glucuronidation, sulfation, methylation, and oxidation. This is the first systematic study on the metabolism of the FXJC in vivo, providing valuable information for future studies on the efficacy, toxicity, and mechanism of the FXJC.
To quickly synthesize complex scenes, digital artists often collage together visual elements from multiple sources: for example, mountains from New Zealand behind a Scottish castle with wisps of Saharan sand in front. In this paper, we propose to use a similar process in order to parse a scene. We model a scene as a collage of warped, layered objects sampled from labeled, reference images. Each object is related to the rest by a set of support constraints. Scene parsing is achieved through analysis-by-synthesis. Starting with a dataset of labeled exemplar scenes, we retrieve a dictionary of candidate object segments that match a query image. We then combine elements of this set into a "scene collage" that explains the query image. Beyond just assigning object labels to pixels, scene collaging produces much more information, such as the number of each type of object in the scene, how they support one another, the ordinal depth of each object, and, to some degree, occluded content. We exploit this representation for several applications: image editing, random scene synthesis, and image-to-anaglyph.
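The analysis-by-synthesis selection can be caricatured as a greedy cover: repeatedly add the candidate segment that explains the most still-unexplained pixels. The scoring here is hypothetical; the paper's inference also handles warping, layering, and support constraints:

```python
import numpy as np

def greedy_collage(candidate_masks, scores):
    """Greedy sketch of scene collaging: pick segments in order of how
    many not-yet-explained pixels they cover, weighted by a per-segment
    match score, until no remaining candidate adds coverage."""
    explained = np.zeros_like(candidate_masks[0], dtype=bool)
    chosen = []
    remaining = list(range(len(candidate_masks)))
    while remaining:
        gains = [scores[i] * (candidate_masks[i] & ~explained).sum()
                 for i in remaining]
        best = int(np.argmax(gains))
        if gains[best] <= 0:
            break  # nothing left explains new pixels
        idx = remaining.pop(best)
        chosen.append(idx)
        explained |= candidate_masks[idx]
    return chosen
```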