In the last few decades, interest in developing, constructing, and using structural health monitoring (SHM) systems has increased significantly. A classic monitoring system should, by definition, include not only a diagnostic module but also a module responsible for monitoring loads. These loads can be measured directly with piezoelectric force sensors or indirectly with strain gauges, such as resistance strain gauges or FBG sensors. However, this is not always feasible, either because of how the force is applied or because sensors cannot be mounted. Therefore, methods that identify excitation forces from response measurements are often used. This approach is usually cheaper and easier to implement on the measurement side. However, it requires a network of response sensors, whose installation and wiring can cause technological difficulties and, for slender structures, modify the results. Moreover, many load identification methods require multiple sensors to identify a single force history; increasing the number of sensors recording responses improves the numerical conditioning of the method. The present article proposes the use of contactless measurements, carried out with a high-speed camera, to identify the forces exciting the object.
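The point about numerical conditioning can be illustrated with a minimal least-squares sketch: several response sensors, each with a known transfer coefficient, jointly estimate one force sample. All coefficients and readings below are hypothetical, not from the paper.

```python
# Minimal sketch: identifying a single excitation-force sample from
# several response sensors via least squares (all values hypothetical).
# Model: response_i = h_i * force + noise, one coefficient per sensor.

def identify_force(h, responses):
    """Least-squares estimate of a scalar force from n sensor responses.

    h         -- list of transfer coefficients (response per unit force)
    responses -- list of measured responses, same length as h
    """
    num = sum(hi * ri for hi, ri in zip(h, responses))
    den = sum(hi * hi for hi in h)
    return num / den  # normal-equation solution (H^T H)^-1 H^T r

# Three sensors observing the same 10 N force with small measurement errors:
h = [0.5, 1.0, 2.0]
responses = [5.1, 9.9, 20.05]
force = identify_force(h, responses)
```

With more sensors the denominator grows and individual sensor errors average out, which is the conditioning benefit the text mentions.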
Learning to See Through With Events. Yu, Lei; Zhang, Xiang; Liao, Wei ...
IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 45, No. 7, 1 July 2023.
Journal article; peer reviewed; open access.
Although synthetic aperture imaging (SAI) can achieve the seeing-through effect by blurring out off-focus foreground occlusions while recovering in-focus occluded scenes from multi-view images, its performance is often degraded by dense occlusions and extreme lighting conditions. To address this problem, this paper presents an Event-based SAI (E-SAI) method relying on the asynchronous events, with extremely low latency and high dynamic range, acquired by an event camera. Specifically, the collected events are first refocused by a Refocus-Net module to align in-focus events while scattering out off-focus ones. Following that, a hybrid network composed of spiking neural networks (SNNs) and convolutional neural networks (CNNs) is proposed to encode the spatio-temporal information from the refocused events and reconstruct a visual image of the occluded targets. Extensive experiments demonstrate that the proposed E-SAI method achieves remarkable performance in dealing with very dense occlusions and extreme lighting conditions and produces high-quality images from pure events. Codes and datasets are available at https://dvs-whu.cn/projects/esai/ .
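The refocusing step can be sketched with the classical shift model (the paper learns this with a Refocus-Net; the geometric version below is only an illustration, with hypothetical camera parameters): an event recorded while the camera moves is shifted so that scene points at the focal depth align across time.

```python
# Toy sketch of event-refocusing geometry. An event at pixel x, time t,
# captured while the camera moves with speed v, is shifted so that points
# at the focal depth d align with the reference view at t = 0.

def refocus_x(x, t, v, f, d):
    """Shift pixel x of an event at time t back to the reference view.

    v -- camera speed (m/s), f -- focal length (px), d -- focal depth (m)
    """
    return x - v * t * f / d

# Events from one scene point at the focal depth collapse to one column,
# while off-focus points would remain scattered:
xs = [refocus_x(100.0 + shift, t, v=1.0, f=500.0, d=5.0)
      for t, shift in [(0.0, 0.0), (0.01, 1.0), (0.02, 2.0)]]
```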
Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but its price has recently dropped significantly, opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.
Cameras have been widely used in traffic operations. While many technologically smart camera solutions in the market can be integrated into Intelligent Transport Systems (ITS) for automated detection, monitoring, and data generation, many Network Operations (a.k.a. Traffic Control) Centres still use legacy camera systems as manual surveillance devices. In this paper, we demonstrate effective use of these older assets by applying computer vision techniques to extract traffic data from videos captured by legacy cameras. In our proposed vision-based pipeline, we adopt recent state-of-the-art object detectors and transfer learning to detect vehicles, pedestrians, and cyclists from monocular videos. By weakly calibrating the camera, we demonstrate a novel application of the image-to-world homography which gives our monocular vision system the efficacy of counting vehicles by lane and estimating vehicle length and speed in real-world units. Our pipeline also includes a module which combines a convolutional neural network (CNN) classifier with projective geometry information to classify vehicles. We have tested it on videos captured at several sites with different traffic flow conditions and compared the results with the data collected by piezoelectric sensors. Our experimental results show that the proposed pipeline can process 60 frames per second for pre-recorded videos and yield high-quality metadata for further traffic analysis.
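The image-to-world homography step can be sketched as follows: a pixel (u, v) maps to road-plane coordinates via w ~ H [u, v, 1]^T, and speed follows from two detections a known time apart. The matrix H below is a toy, identity-like scaling, not a real calibration.

```python
import math

# Sketch of speed estimation through an image-to-world homography
# (hypothetical H; a real H comes from weak calibration of the camera).

def to_world(H, u, v):
    """Map pixel (u, v) to road-plane metres via homogeneous coordinates."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w  # homogeneous normalisation

def speed_mps(H, p1, p2, dt):
    """Speed from two pixel detections taken dt seconds apart."""
    x1, y1 = to_world(H, *p1)
    x2, y2 = to_world(H, *p2)
    return math.hypot(x2 - x1, y2 - y1) / dt

# Toy homography scaling pixels to metres (0.05 m per pixel):
H = [[0.05, 0.0, 0.0], [0.0, 0.05, 0.0], [0.0, 0.0, 1.0]]
v = speed_mps(H, (100, 200), (140, 200), dt=0.2)  # 2 m in 0.2 s
```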
Accurate three-dimensional displacement measurements of bridges and other structures have received significant attention in recent years. The main challenges of such measurements include the cost and the need for a scalable array of instrumentation. This paper presents a novel Hybrid Inertial Vision-Based Displacement Measurement (HIVBDM) system that can measure three-dimensional structural displacements by using a monocular charge-coupled device (CCD) camera, a stationary calibration target, and an attached tilt sensor. The HIVBDM system does not require the camera to be stationary during the measurements; the camera movements, i.e., rotations and translations, during the measurement process are compensated by using a stationary calibration target in the field of view (FOV) of the camera. An attached tilt sensor is further used to refine the camera movement compensation and better infer the global three-dimensional structural displacements. This HIVBDM system is evaluated on both short-term and long-term synthetic static structural displacements in an indoor simulated experimental environment. In the experiments, at a 9.75 m operating distance between the monitoring camera and the structure being monitored, the proposed HIVBDM system achieves an average of 1.440 mm Root Mean Square Error (RMSE) on the in-plane structural translations and an average of 2.904 mm RMSE on the out-of-plane structural translations.
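The core compensation idea can be sketched simply: any apparent shift of the stationary calibration target must be camera motion, so subtracting it from the structure's apparent shift leaves the true structural displacement. The numbers below are hypothetical and assume pixel shifts have already been scaled to millimetres.

```python
# Sketch of camera-motion compensation with a stationary target
# (toy values in mm; the real system also uses a tilt sensor to
# refine this compensation, which is omitted here).

def compensated_displacement(structure_shift, target_shift):
    """Subtract the target's apparent (dx, dy) shift, attributable to
    camera motion, from the structure's apparent (dx, dy) shift."""
    return (structure_shift[0] - target_shift[0],
            structure_shift[1] - target_shift[1])

# Camera drift makes everything appear to move by (0.8, -0.3) mm;
# the structure additionally moved (1.5, 0.0) mm:
dx, dy = compensated_displacement((2.3, -0.3), (0.8, -0.3))
```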
We present Submerse, an end-to-end framework for visualizing flooding scenarios on large and immersive display ecologies. Specifically, we reconstruct a surface mesh from input flood simulation data and generate a to-scale 3D virtual scene by incorporating geographical data such as terrain, textures, buildings, and additional scene objects. To optimize computation and memory performance for large simulation datasets, we discretize the data on an adaptive grid using dynamic quadtrees and support level-of-detail-based rendering. Moreover, to provide a perception of flooding direction at a time instance, we animate the surface mesh by synthesizing water waves. As interaction is key for effective decision-making and analysis, we introduce two novel techniques for flood visualization in immersive systems: (1) an automatic scene-navigation method using optimal camera viewpoints generated for marked points-of-interest based on the display layout, and (2) an AR-based focus+context technique using an aux display system. Submerse is developed in collaboration between computer scientists and atmospheric scientists. We evaluate the effectiveness of our system and application by conducting workshops with emergency managers, domain experts, and concerned stakeholders in the Stony Brook Reality Deck, an immersive gigapixel facility, to visualize a superstorm flooding scenario in New York City.
Thermal imaging is an important source of information for geographic information systems (GIS) in various aspects of environmental research. This work contains a variety of experiences related to the use of the Yuneec E10T thermal imaging camera, with a 320 × 240 pixel matrix and 4.3 mm focal length, dedicated to working with the Yuneec H520 UAV in obtaining data on the natural environment. Unfortunately, as a commercial product, the camera is available without radiometric characteristics. Using the heated bed of the Omni3d Factory 1.0 printer, radiometric calibration was performed in the range of 18–100 °C (high sensitivity range, i.e., high gain settings of the camera). The stability of the thermal camera operation was assessed using several sets of a large number of photos, acquired over three areas in the form of aerial blocks composed of parallel rows with a specific sidelap and longitudinal coverage. For these image sets, statistical parameters of the thermal images, such as the mean, minimum, and maximum, were calculated and then analyzed according to the order of registration. Analysis of photos taken every 10 m in vertical profiles up to 120 m above ground level (AGL) was also performed to show the changes in image temperature established within the reference surface. Using the established radiometric calibration, it was found that the camera maintains linearity between the observed temperature and the measured brightness temperature in the form of a digital number (DN). It was also found that the camera is sometimes unstable after being turned on, which indicates the necessity of adjusting the device’s operating conditions to external conditions for several minutes or taking photos over an area larger than the region of interest.
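A linear radiometric calibration of this kind can be sketched as an ordinary least-squares fit of T = a · DN + b to blackbody (heated-bed) reference points. The DN/temperature pairs below are hypothetical, chosen only to illustrate the fit.

```python
# Sketch of a linear radiometric calibration T = a * DN + b, fitted by
# ordinary least squares to reference temperatures (hypothetical data).

def fit_linear(dns, temps):
    """Return slope a and intercept b of the least-squares line."""
    n = len(dns)
    mx = sum(dns) / n
    my = sum(temps) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(dns, temps))
         / sum((x - mx) ** 2 for x in dns))
    b = my - a * mx
    return a, b

dns = [2000, 4000, 6000, 8000]      # raw digital numbers from the camera
temps = [20.0, 40.0, 60.0, 80.0]    # heated-bed reference temps, deg C
a, b = fit_linear(dns, temps)

def predict(dn):
    """Convert a raw DN to a calibrated temperature in deg C."""
    return a * dn + b
```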
At present, the measurement accuracy and measurement range cannot be balanced when measuring shaft diameter by the machine vision method. In this paper, we propose a large-scale shaft diameter precision measurement method based on a dual-camera measurement system. The unified world coordinate system of the two cameras is established by analyzing the dual-camera imaging model and obtaining the measurement formula. In order to verify the validity of the proposed method, two black blocks in the calibration plate with a known center distance of 100 mm were measured. The mean value was 100.001 mm and the standard deviation was 0.00039 mm over 10 measurements. Finally, the proposed system was applied to the diameter measurement of a complex crankshaft. The mean μ95 values of the CMM and the proposed method were ±1.02 μm and ±1.07 μm, respectively, indicating that the measurement accuracy of the proposed method is roughly equal to that of the CMM.
This paper presents a novel method for fully automatic and convenient extrinsic calibration of a 3D LiDAR and a panoramic camera with a normally printed chessboard. The proposed method is based on the 3D corner estimation of the chessboard from the sparse point cloud generated by one frame scan of the LiDAR. To estimate the corners, we formulate a full-scale model of the chessboard and fit it to the segmented 3D points of the chessboard. The model is fitted by optimizing the cost function under constraints of correlation between the reflectance intensity of the laser and the color of the chessboard’s patterns. Powell’s method is introduced for resolving the discontinuity problem in optimization. The corners of the fitted model are considered the 3D corners of the chessboard. Once the corners of the chessboard in the 3D point cloud are estimated, the extrinsic calibration of the two sensors is converted to a 3D-2D matching problem. The corresponding 3D-2D points are used to calculate the absolute pose of the two sensors with Unified Perspective-n-Point (UPnP). Further, the calculated parameters are regarded as initial values and are refined using the Levenberg-Marquardt method. The performance of the proposed corner detection method from the 3D point cloud is evaluated using simulations. The results of experiments, conducted on a Velodyne HDL-32e LiDAR and a Ladybug3 camera under the proposed re-projection error metric, qualitatively and quantitatively demonstrate the accuracy and stability of the final extrinsic calibration parameters.
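The re-projection error metric used to assess such a calibration can be sketched as follows: 3D chessboard corners (already transformed into the camera frame by the estimated extrinsics) are projected into the image and compared to the detected 2D corners. The sketch assumes a simple pinhole model with hypothetical intrinsics; the paper's panoramic camera model differs.

```python
import math

# Sketch of a re-projection error metric for extrinsic calibration
# (pinhole model with toy intrinsics; illustrative values only).

def project(pt, fx, fy, cx, cy):
    """Project a camera-frame 3D point with a simple pinhole model."""
    X, Y, Z = pt
    return fx * X / Z + cx, fy * Y / Z + cy

def reprojection_rmse(pts3d, pts2d, fx, fy, cx, cy):
    """RMSE between projected 3D corners and detected 2D corners."""
    errs = []
    for p3, (u, v) in zip(pts3d, pts2d):
        up, vp = project(p3, fx, fy, cx, cy)
        errs.append((up - u) ** 2 + (vp - v) ** 2)
    return math.sqrt(sum(errs) / len(errs))

pts3d = [(0.1, 0.0, 2.0), (0.0, 0.1, 2.0)]       # corners in camera frame
pts2d = [(345.0, 320.0), (320.0, 345.0)]          # detected image corners
err = reprojection_rmse(pts3d, pts2d, fx=500, fy=500, cx=320, cy=320)
```

Minimizing this quantity over the pose parameters is what the Levenberg-Marquardt refinement step does after the UPnP initialization.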
Akatsuki has been in operation since Venus orbit insertion-revenge 1 (VOI-R1) in December 2015 and has been observing Venus’ cloud-top temperature with the Longwave Infrared Camera (LIR) since the start of nominal observations in April 2016. LIR was originally designed to maintain its performance for at least 4 years after the VOI originally planned in December 2010. Although the operation time of LIR has exceeded its designed lifetime as of August 2022, it is still functioning normally. The mechanical shutter plate has been kept at a normal temperature and, in the closed position, used as a hot reference in determining the brightness temperature of objects. Since the observed temperature of the background deep space is merely a value representing the output for no radiation input, it should be the same in any observation. It was around 180 K just after the launch of Akatsuki in May 2010; however, it gradually increased to approximately 200 K by February 2022. Average Venus disk temperatures also show a slight increasing trend. The increases in the background and Venus’ disk temperatures are most likely due to degradation of the sensitivity of the bolometer array used in LIR as an image sensor. These temperatures have apparently been increasing since LIR was activated in October 2016. While LIR is activated, the bolometer temperature is kept at 40 °C, and a moderate baking effect may have accelerated degassing in the bolometer package; the resulting increase in thermal conductivity, or decrease in transmittance of the window contaminated by evaporated components, may have degraded the sensitivity of the bolometer. A sensitivity degradation of 5% from October 2016 to February 2022 is estimated from the increasing trend of the background temperature. A correction has been made to the LIR data to keep the background temperature constant. The corrected data show no increasing trend in either the background or Venus’ disk temperature. The corrected data are open to the public as a more reliable dataset for investigating the long-term variability of thermal conditions at cloud-top altitudes.
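A correction that keeps the background temperature constant can be sketched, in its simplest form, as subtracting each frame's background drift relative to a fixed reference level. This is only an illustrative offset model with toy values; the actual LIR correction accounts for the estimated sensitivity degradation and is not specified in detail here.

```python
# Sketch of a background-drift correction: the per-frame offset needed
# to hold the deep-space background at a fixed reference value is
# subtracted from every pixel (temperatures in kelvin, toy values).

REFERENCE_BACKGROUND_K = 180.0  # background level just after launch

def correct_frame(frame, measured_background):
    """Subtract the background drift from every pixel temperature."""
    offset = measured_background - REFERENCE_BACKGROUND_K
    return [t - offset for t in frame]

# A frame whose deep-space background has drifted up to 200 K:
corrected = correct_frame([200.0, 233.5, 241.2], measured_background=200.0)
```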