• An innovative idea of the sampling region and sampling angle of the light field camera is proposed.
• The effects of flame radiation sampling of the different light field cameras are investigated.
• The distributions of the sampled rays are also compared.
• Reconstruction accuracy is investigated for different distances from the microlens array to the photosensor.
Different light field cameras (i.e., traditional and focused) can be used for flame temperature measurement, but it is crucial to establish which of them provides better reconstruction accuracy. In this study, numerical simulations were carried out to investigate the reconstruction accuracy of flame temperature for the different light field cameras. The effects of flame radiation sampling of the light field cameras were described and evaluated. A novel concept of the sampling region and sampling angle of the light field camera was proposed to assess the directional accuracy of the rays sampled by each pixel on the photosensor. It was observed that the traditional light field camera samples more rays per pixel, and hence the rays sampled by each pixel approximate a single direction less accurately. A representative sampled ray was defined to obtain the direction of flame radiation. The radiation intensity of each pixel was calculated, indicating that the traditional light field camera samples less radiation information than the focused light field camera. A non-negative least squares (NNLS) algorithm was used to reconstruct the flame temperature. The reconstruction accuracy was also evaluated for different distances from the microlens array (MLA) to the photosensor. The simulation results suggest that the focused light field camera performs better than the traditional light field camera. Experiments were also carried out to reconstruct the temperature distribution of ethylene diffusion flames based on light field imaging and to validate the proposed model.
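The NNLS step admits a compact sketch. The matrix `A` below is a hypothetical geometric weighting matrix relating voxel emissions to pixel intensities (the actual matrix depends on the camera model and ray sampling), and the solver is a minimal projected-gradient NNLS, not necessarily the algorithm used in the study:

```python
def nnls_pg(A, b, iters=5000):
    """Minimal projected-gradient solver for min ||A x - b||^2, x >= 0.

    A is a list of rows; a crude 1/||A||_F^2 step size keeps the
    iteration stable for small, well-scaled problems.
    """
    m, n = len(A), len(A[0])
    lr = 1.0 / sum(a * a for row in A for a in row)
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - b, gradient g = A^T r (up to a constant factor)
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by projection onto the nonnegative orthant
        x = [max(0.0, x[j] - lr * g[j]) for j in range(n)]
    return x

# Hypothetical 3-pixel, 2-voxel system whose intensities b were
# generated by nonnegative emissions [1, 2].
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x = nnls_pg(A, b)  # x approaches [1.0, 2.0]
```

In practice the system is large and ill-conditioned, which is why a dedicated NNLS routine (e.g., an active-set method) is preferred over this toy iteration.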
• We decouple the difficulties affecting the person re-identification task into the Camera-Camera (CC) problem and the Camera-Person (CP) problem.
• We propose a bi-stream generative model for solving the CC and CP problems separately, with promising results.
• We design a part-weighted loss based on the unbalanced number of human body parts in the dataset to guide the model to focus on the more important parts.
Generalizable person re-identification (re-ID) has attracted growing attention due to its powerful adaptation capability in unseen data domains. However, existing solutions often neglect either cross-camera variations (e.g., illumination and resolution differences) or pedestrian misalignments (e.g., viewpoint and pose discrepancies), which easily leads to poor generalization when adapting to a new domain. In this paper, we formulate these difficulties as: 1) the Camera-Camera (CC) problem, which denotes the various human appearance changes caused by different cameras; 2) the Camera-Person (CP) problem, which indicates the pedestrian misalignments caused by the same identity appearing under different camera viewpoints or changing poses. To solve these issues, we propose a Bi-stream Generative Model (BGM) to learn fine-grained representations that fuse a camera-invariant global feature with a pedestrian-aligned local feature; the model consists of an encoding network and two decoding sub-networks, one per stream. Guided by original pedestrian images, one stream learns a camera-invariant global feature for the CC problem by filtering out cross-camera interference factors. For the CP problem, the other stream learns a pedestrian-aligned local feature for pedestrian alignment using information-complete, densely semantically aligned part maps. Moreover, a part-weighted loss function is presented to reduce the influence of missing parts on pedestrian alignment. Extensive experiments demonstrate that our method outperforms state-of-the-art methods on large-scale generalizable re-ID benchmarks, under both the domain generalization setting and the cross-domain setting.
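As a rough illustration of the part-weighting idea (not the paper's exact formulation), a part-weighted loss can up-weight parts that are rare in the dataset and mask out parts missing from the current image; `part_freq` below is a hypothetical per-part occurrence frequency, and the inverse-frequency weighting is one plausible choice:

```python
def part_weighted_loss(part_losses, part_visible, part_freq):
    """Combine per-part losses with inverse-frequency weights.

    part_losses:  loss value for each body part
    part_visible: whether the part is present in this image (missing
                  parts are masked out so they do not affect alignment)
    part_freq:    fraction of training images in which the part appears
    """
    weights = [1.0 / f for f in part_freq]
    norm = sum(w for w, v in zip(weights, part_visible) if v)
    if norm == 0.0:
        return 0.0  # no visible parts: contribute nothing
    total = sum(w * l for w, l, v in zip(weights, part_losses, part_visible) if v)
    return total / norm

# Two parts: the rarer part (frequency 0.5) gets twice the weight of the
# always-present one, so its loss dominates the weighted average.
loss = part_weighted_loss([1.0, 2.0], [True, True], [0.5, 1.0])
```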
Consumer RGB‐D and binocular stereo cameras have been applied to fruit detection and localization. However, few studies compare the performance of newly released cameras under the same scene in a complex orchard. This study evaluates the performance of consumer RGB‐D and binocular stereo cameras based on YOLOv5x for kiwifruit detection and localization, and selects the one best suited to a complex orchard environment. First, Azure Kinect, RealSense D435, and ZED 2i cameras were employed to capture images of kiwifruit canopies. Subsequently, YOLOv5x was trained to detect kiwifruits and calyxes in the images, and an overlap‐partitioning detection strategy was applied to kiwifruit and calyx detection. Additionally, spatial coordinate transformation was performed by combining each camera's extrinsic parameters with the depth map it generated. Finally, the three‐dimensional coordinates of the calyxes were calculated and compared with ground truth, and the localization accuracy of the calyxes was analyzed. Results show that YOLOv5x obtained mean average precision of 93.2%, 91.3%, and 95.8% on kiwifruit and calyx detection for the three cameras, respectively. The overlap‐partitioning detection strategy improved calyx detection, significantly increasing average precision by 13.00%, 16.30%, and 7.70%, respectively. The mean absolute deviation of calyx coordinates on the Y‐axis was relatively high for the ZED 2i at 8.44 mm, compared with 6.67 mm for the Azure Kinect, while the RealSense D435 achieved the minimum deviations of 10.42 mm on the X‐axis and 18.33 mm on the Z‐axis. The average spatial localization speed for the calyxes in one image was 0.164 s, 0.037 s, and 0.062 s for the Azure Kinect, RealSense D435, and ZED 2i, respectively. These results indicate that the RealSense D435 outperforms the Azure Kinect and ZED 2i in a kiwifruit orchard, and could serve as a valuable reference for selecting a camera with high-precision localization capacity in other orchards.
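The coordinate-transformation step can be sketched as a standard pinhole back-projection from the depth map followed by a rigid camera-to-world transform. The intrinsics (`fx`, `fy`, `cx`, `cy`) and extrinsics (`R`, `t`) below are placeholder values, not the calibration of any of the three cameras:

```python
def deproject(u, v, depth, fx, fy, cx, cy):
    # Back-project pixel (u, v) with depth Z into camera-frame (X, Y, Z),
    # using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def camera_to_world(p, R, t):
    # Rigid transform with extrinsics: world = R @ p + t (R is 3x3, t length 3).
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))

# Placeholder intrinsics: focal length 500 px, principal point (320, 240).
pt = deproject(420, 240, 500.0, 500.0, 500.0, 320.0, 240.0)  # -> (100.0, 0.0, 500.0)
```

The localization error analysis then reduces to comparing such back-projected calyx points against the ground-truth coordinates, axis by axis.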
This paper introduces an improvement of the 'shake-the-box' (STB) technique (Schanz, Gesemann, and Schröder, Exp. Fluids 57.5, 2016) that uses a polynomial calibration model and line-of-sight constraints (LOSC) to overcome refractive-interface issues in Lagrangian particle tracking (LPT) measurement. The method (named LOSC-LPT) draws inspiration from two-plane polynomial camera calibration in tomographic particle image velocimetry (Worth, Nickels, Thesis, 2010) and the STB-based open-source Lagrangian particle tracking (OpenLPT) framework (Tan, Salibindla, Masuk, and Ni, Exp. Fluids 61.2, 2019). LOSC-LPT introduces polynomial mapping functions into STB calibration for conditions involving gas–solid–liquid interfaces at container walls with large refractive index variations, which facilitates particle stereo matching, three-dimensional (3D) triangulation, iterative particle reconstruction, and further refinement of 3D particle positions by shaking along the line of sight. Performance evaluation on synthetic noise-free images with a particle image density of 0.05 particles per pixel in the presence of refractive interfaces demonstrates that LOSC-LPT detects more particles and exhibits lower position uncertainty in the reconstructed particles, resulting in higher accuracy and robustness than OpenLPT achieves. In an application to an elliptical jet flow in an octagonal tank with refractive interfaces, the polynomial mapping yields smaller errors (mean calibration error <0.1 px) and thus more long trajectories identified by LOSC-LPT (13,000) compared with OpenLPT (4,500), which uses the pinhole Tsai model (mean calibration error >1.0 px).
Moreover, 3D flow-field reconstructions demonstrate that the LOSC-LPT framework can recover a more accurate 3D Eulerian flow field and capture more complete coherent structures in the flow; it thus holds great potential for widespread application in 3D experimental fluid measurements.
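The core of the two-plane calibration idea can be sketched as follows: each camera stores one polynomial fit per calibration plane, and a pixel's line of sight is the ray through its mapped points on the two planes. The second-order polynomial and the coefficient layout here are illustrative; the actual method fits higher-order polynomials from calibration-target images:

```python
def poly2(c, x, y):
    # Evaluate a 2nd-order bivariate polynomial with coefficients
    # c = [c00, c10, c01, c20, c11, c02] at image point (x, y).
    return c[0] + c[1] * x + c[2] * y + c[3] * x * x + c[4] * x * y + c[5] * y * y

def line_of_sight(plane0, plane1, x, y, z0=0.0, z1=1.0):
    # plane0/plane1 hold (x-coeffs, y-coeffs) of the per-plane fits.
    # Map the pixel onto both calibration planes, then return the ray
    # (point, direction) through the two mapped world points.
    p0 = (poly2(plane0[0], x, y), poly2(plane0[1], x, y), z0)
    p1 = (poly2(plane1[0], x, y), poly2(plane1[1], x, y), z1)
    d = tuple(b - a for a, b in zip(p0, p1))
    return p0, d

# Degenerate identity mapping on both planes: pixel (2, 3) yields a ray
# starting at (2, 3, 0) pointing straight along +z.
cx = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
cy = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
p0, d = line_of_sight((cx, cy), (cx, cy), 2.0, 3.0)
```

Triangulation and the "shaking" refinement then operate on these rays instead of on pinhole-model rays, which is what makes the approach robust to refraction at the container walls.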
Multi-camera interference (MCI) is an important challenge faced by continuous-wave time-of-flight (C-ToF) cameras. In the presence of other cameras, a C-ToF camera may receive light from the other cameras' sources, resulting in potentially large depth errors. We propose stochastic exposure coding (SEC), a novel approach to mitigating MCI. In SEC, the camera integration time is divided into multiple time slots, and each camera is turned on during a slot with an optimal probability chosen to avoid interference while maintaining a high signal-to-noise ratio (SNR). The proposed approach has the following benefits. First, SEC can effectively filter out both the AC and DC components of interfering signals, which simultaneously achieves high SNR and mitigates depth errors. Second, time-slotting in SEC enables 3D imaging without saturation in the high-photon-flux regime. Third, the energy saved by turning the camera on during only a fraction of the integration time can be used to amplify the source peak power, which increases the robustness of SEC to ambient light. Lastly, SEC can be implemented without modifying the C-ToF camera's coding functions, and thus can be used with a wide range of cameras with minimal changes. We demonstrate the performance benefits of SEC with thorough theoretical analysis, simulations, and real experiments across a wide range of imaging scenarios.
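The slot-probability trade-off at the heart of SEC can be illustrated with a toy model (a simplification of the paper's analysis): if each of N interfering cameras is active in a given slot with probability p, then a slot used by our camera is interference-free with probability (1-p)^N, and the expected number of clean active slots, n_slots * p * (1-p)^N, is maximized at p = 1/(N+1):

```python
def clean_slot_fraction(p, n_interferers):
    # Probability that none of the other cameras emits during a given slot,
    # assuming each is active independently with probability p.
    return (1.0 - p) ** n_interferers

def expected_clean_slots(p, n_interferers, n_slots):
    # Expected number of slots where our camera is on AND sees no interference.
    return n_slots * p * clean_slot_fraction(p, n_interferers)

# With one interferer, p = 0.5 maximizes p * (1 - p): 25 clean slots of 100.
best = expected_clean_slots(0.5, 1, 100)
```

This captures only the collision-avoidance side; the paper's full analysis also accounts for SNR, photon flux, and source peak-power amplification.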
Infrared thermography has been extensively applied over decades to areas such as the maintenance of electrical installations. In electrical machinery, its use has been mainly circumscribed to the detection of faults in static machines, such as power transformers; for the predictive maintenance of rotating electrical machines, its use has been much more limited. In spite of this, the potential of the tool, together with the progressive decrease in the price of infrared cameras, makes the technique a very interesting option to at least complement the diagnosis provided by other well-known techniques, such as current or vibration data analysis. In this context, infrared thermography has recently shown potential for the detection of motor failures including misalignments, cooling problems, bearing damage, and connection defects. This work presents several industrial cases that illustrate the effectiveness of the technique for detecting a wide range of faults in induction motors operating in the field. The data obtained with this technique made it possible to detect faults of diverse nature (electrical, mechanical, thermal, and environmental) and were very useful either to diagnose the faults or to complement the diagnosis provided by other tools.
This paper presents a feature-based simultaneous localization and mapping (SLAM) system for panoramic image sequences obtained from a multiple-fisheye-camera rig in a wide-baseline mobile mapping system (MMS). First, the developed fisheye camera calibration method combines an equidistance projection model with a trigonometric polynomial to achieve high-accuracy calibration from the fisheye camera to an equivalent ideal frame camera, which ensures an accurate transform from the fisheye images to the corresponding panoramic image. Second, we developed a panoramic camera model, a corresponding bundle adjustment with a specific back-propagation error function, and a linear pose initialization algorithm. Third, the implemented feature-based SLAM pipeline consists of several specific strategies and algorithms for initialization, feature matching, frame tracking, and loop closing to overcome the difficulties of tracking wide-baseline panoramic image sequences. We conducted experiments on large-scale MMS datasets of more than 15 km of trajectories and 14,000 panoramic images, as well as on small-scale public video datasets. Our results show that the developed panoramic SLAM system, PAN-SLAM, achieves fully automatic camera localization and sparse map reconstruction in both small-scale indoor and large-scale outdoor environments, including challenging scenes (e.g., a dark tunnel), without the aid of any other sensors. The localization accuracy, measured by the absolute trajectory error (ATE), was close to that of a high-accuracy GNSS/INS reference, at about 0.1 m. PAN-SLAM also outperformed several feature-based fisheye and monocular SLAM systems and showed markedly better robustness in a variety of environments. The system can be considered a robust complementary solution, and an alternative, to expensive commercial navigation systems, especially in urban environments where signal obstruction and multipath interference are common.
Source code and demo are available at http://study.rsgis.whu.edu.cn/pages/download/.
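The first calibration stage rests on the equidistance model r = f·θ; mapping a fisheye image point to the equivalent ideal (pinhole) frame then amounts to replacing f·θ with f·tan θ along the same radial direction. The sketch below shows only this base term and omits the trigonometric-polynomial correction the paper adds on top:

```python
import math

def fisheye_to_ideal(xf, yf, f):
    # Equidistance model: radial distance from the principal point is
    # r = f * theta. The ideal pinhole frame instead has r' = f * tan(theta),
    # so rescale the point along the same radial direction.
    r = math.hypot(xf, yf)
    if r == 0.0:
        return (0.0, 0.0)  # the principal point maps to itself
    theta = r / f
    rp = f * math.tan(theta)
    return (xf * rp / r, yf * rp / r)

# A point 50 px from the center of an f = 100 px fisheye image moves
# outward to f * tan(0.5) ~ 54.6 px in the ideal frame.
pt = fisheye_to_ideal(50.0, 0.0, 100.0)
```

Points near the 90° half-angle (θ → π/2) blow up under tan θ, which is why the panoramic image, rather than a single ideal frame, is the natural working representation for a full fisheye field of view.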
PiCam. Venkataraman, Kartik; Lelescu, Dan; Duparré, Jacques; et al. ACM Transactions on Graphics, 11/2013, Vol. 32, No. 6. Journal article, peer-reviewed, open access.
We present PiCam (Pelican Imaging Camera-Array), an ultra-thin, high-performance monolithic camera array that captures light fields and synthesizes high-resolution images along with a range image (scene depth) through integrated parallax detection and superresolution. The camera is passive, supports both stills and video, is low-light capable, and is small enough to be included in the next generation of mobile devices, including smartphones. Prior works in camera arrays [Rander et al. 1997; Yang et al. 2002; Zhang and Chen 2004; Tanida et al. 2001; Tanida et al. 2003; Duparré et al. 2004] have explored multiple facets of light field capture, from viewpoint synthesis, synthetic refocus, range-image computation, and high-speed video to the micro-optical aspects of system miniaturization. However, none of these works addressed the modifications needed to achieve the strict form factor and image quality required to make array cameras practical for mobile devices. In our approach, we customize many aspects of the camera array, including lenses, pixels, sensors, and software algorithms, to achieve imaging performance and form factor comparable to existing mobile phone cameras.
Our contributions to the post-processing of images from camera arrays include a cost function for parallax detection that integrates across multiple color channels, and a regularized image restoration (superresolution) process that takes into account all the system degradations and adapts to a range of practical imaging conditions. The registration uncertainty from the parallax detection process is integrated into a maximum-a-posteriori (MAP) formulation that synthesizes an estimate of the high-resolution image and the scene depth. We conclude with examples of the array's capabilities, such as post-capture (still) refocus, video refocus, view synthesis to demonstrate motion parallax, and 3D range images, and briefly address future work.
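The flavor of the regularized restoration can be conveyed with a 1D toy problem, far simpler than the paper's multi-channel MAP formulation: recover a high-resolution signal from its 2x-downsampled observation by minimizing a data-fit term plus a smoothness prior with gradient descent. The pair-averaging degradation operator and the quadratic prior are illustrative choices:

```python
def downsample(x):
    # Toy degradation: average adjacent pairs (blur + 2x decimation).
    return [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]

def downsample_adjoint(r, n):
    # Adjoint of `downsample` for length-n signals.
    out = [0.0] * n
    for i, v in enumerate(r):
        out[2 * i] += v / 2.0
        out[2 * i + 1] += v / 2.0
    return out

def map_restore(y, n, lam=0.1, iters=2000, lr=0.5):
    # Minimize ||downsample(x) - y||^2 + lam * sum_i (x[i+1] - x[i])^2
    # by plain gradient descent from a zero initialization.
    x = [0.0] * n
    for _ in range(iters):
        r = [a - b for a, b in zip(downsample(x), y)]
        g = [2.0 * v for v in downsample_adjoint(r, n)]  # data-term gradient
        for i in range(n - 1):                           # smoothness gradient
            d = x[i + 1] - x[i]
            g[i] -= 2.0 * lam * d
            g[i + 1] += 2.0 * lam * d
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

x = map_restore([1.0, 2.0], 4)  # a smooth ramp whose pair means are near 1 and 2
```

The real pipeline replaces the toy operator with the full chain of optical and sensor degradations, and folds the parallax-registration uncertainty into the data term.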
A multi-camera dome consists of a number of cameras arranged in layers to monitor a hemisphere around its center. In volumetric surveillance, a 3D space must be monitored, which can be achieved by deploying a number of multi-camera domes. A monitoring height is considered as a constraint to ensure full coverage of the space below it. Accordingly, the multi-camera dome can be redesigned into a cylinder such that each of its multiple layers has a different coverage radius. Minimum monitoring constraints should be met at all layers. This work presents a cost-optimized design for the multi-camera dome that maximizes its coverage. The cost per node and the number of square meters per dollar of multiple configurations are calculated using a search space of cameras and a set of monitoring and coverage constraints. The proposed design is cost-optimized per node and provides more coverage than the hemispherical multi-camera dome.
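A minimal sketch of the kind of metric such a search optimizes, under simplified geometry that is entirely assumed here (all cameras of a layer at one height, symmetric vertical FOV, flat ground, hypothetical prices): the coverage radius of a layer follows from the tilt and field of view, and configurations are compared by covered square meters per dollar.

```python
import math

def layer_radius(height, tilt_deg, vfov_deg):
    # Horizontal reach on the ground `height` below a camera tilted
    # `tilt_deg` down from horizontal with vertical FOV `vfov_deg`:
    # the upper FOV edge points (tilt - vfov/2) below horizontal.
    # Assumes tilt_deg > vfov_deg / 2 so the edge still hits the ground.
    edge = math.radians(tilt_deg - vfov_deg / 2.0)
    return height / math.tan(edge)

def sqm_per_dollar(radius, n_cameras, unit_cost):
    # Covered ground area per dollar for one circular layer.
    return math.pi * radius ** 2 / (n_cameras * unit_cost)

# A layer of 8 hypothetical $200 cameras tilted 45 deg (no FOV spread)
# reaches 10 m out at a 10 m monitoring height.
r = layer_radius(10.0, 45.0, 0.0)
value = sqm_per_dollar(r, 8, 200.0)
```

A search over camera models, tilts, and per-layer counts would evaluate this metric for every configuration that satisfies the monitoring constraints and keep the best one.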
Auroras can be regarded as the most fascinating manifestation of space weather, and they are continuously observed by ground‐based and, increasingly, space‐based measurements. Investigations of auroras and geospace comprise the main research goals of the Suomi 100 nanosatellite, the first Finnish space research satellite, which has been measuring the Earth's ionosphere since its launch on 3 December 2018. In this work, we present a case study in which the satellite's camera observations of an aurora over Northern Europe are combined with ground‐based observations of the same event. The analyzed image is, to the authors' best knowledge, the first auroral image ever taken by a CubeSat. Our data analysis shows that a satellite vantage point provides complementary, novel information about such phenomena. The 3D reconstruction of the location of the analyzed auroral event demonstrates how information from a 2D image can be used to locate the auroras under study. The location modeling also suggests that the direction of the Earth's limb, which was the viewing direction in the analyzed image, is ideal for observing faint auroras. Although imaging on a small satellite has some large disadvantages compared with ground‐based imaging (the camera cannot be repaired, and the satellite spins and moves fast), the data analysis and modeling demonstrate how even a small 1‐Unit (size: 10 × 10 × 10 cm) CubeSat and its camera, built from cheap commercial off‐the‐shelf components, can open new possibilities for auroral research, especially when its measurements are combined with ground‐based observations.
Plain Language Summary
Auroras, or polar lights, have long been imaged by ground‐based terrestrial cameras. However, auroras can also be observed from space with cameras mounted on spacecraft. In the last few years, a number of very small satellites, so‐called nanosatellites with masses of less than 10 kg, have been developed and launched into low Earth orbits at altitudes between 300 and 600 km. These small, light satellites are much cheaper than traditional large, heavy satellites, whose masses can easily exceed hundreds of kilograms. It is therefore anticipated that nanosatellites will provide new possibilities for investigating auroras. In this study, we analyze, to our knowledge, the first image of an aurora taken by a nanosatellite. The satellite is the small, 10 × 10 × 10 cm Suomi 100 satellite, which was launched in December 2018. We show how the obtained auroral image can provide new information about auroras when it is combined with ground‐based observations and numerical modeling. As additional cameras are included in the design and fabrication of small satellites, we will be able to increase our understanding of auroras and, consequently, of the effects of the Sun on the Earth and beyond.
Key Points
The concept of imaging aurora toward the Earth's limb by a CubeSat camera is demonstrated
The dark background available in the Earth‐limb viewing direction facilitates imaging of dim auroras, e.g., for auroral tomography purposes
The analysis shows that auroral imaging at low Earth orbit can provide a new context for ground‐based auroral and ionospheric observations