► Review major modules and research topics on multi-camera video surveillance. ► Review technologies from the perspective of computer vision and pattern recognition. ► Detailed descriptions of technical challenges and comparison of different solutions. ► Emphasizes the connection and integration of different modules. ► Some problems can be jointly solved to improve efficiency and accuracy.
Intelligent multi-camera video surveillance is a multidisciplinary field related to computer vision, pattern recognition, signal processing, communication, embedded computing and image sensors. This paper reviews the recent development of relevant technologies from the perspectives of computer vision and pattern recognition. The covered topics include multi-camera calibration, computing the topology of camera networks, multi-camera tracking, object re-identification, multi-camera activity analysis and cooperative video surveillance with both active and static cameras. Detailed descriptions of their technical challenges and comparisons of different solutions are provided. The review emphasizes the connection and integration of different modules in various environments and application scenarios. According to the most recent works, some of these problems can be solved jointly to improve efficiency and accuracy. With the fast development of surveillance systems, the scale and complexity of camera networks are increasing, and the monitored environments are becoming more complicated and crowded. This paper discusses how to face these emerging challenges.
Contentious debate is currently taking place regarding the extent to which public scrutiny of the police post‐Ferguson has led to depolicing or to a decrease in proactive police work. Advocates of the “Ferguson effect” claim the decline in proactive policing increased violent crime and assaults on the police. Although police body‐worn cameras (BWCs) are touted as a police reform that can generate numerous benefits, they also represent a form of internal and public surveillance on the police. The surveillance aspect of BWCs suggests that BWCs may generate depolicing through camera‐induced passivity. We test this question with data from a randomized controlled trial of BWCs in Spokane (WA) by assessing the impact of BWCs on four measures: officer‐initiated calls, arrests, response time, and time on scene. We employ hierarchical linear and cross‐classified models to test for between‐ and within‐group differences in outcomes before and after the randomized BWC rollout. Our results demonstrate no evidence of statistically significant camera‐induced passivity across any of the four outcomes. In fact, self‐initiated calls increased for officers assigned to treatment during the RCT. We discuss the theoretical and policy implications of the findings for the ongoing dialogue in policing.
This paper proposes a near-central camera model and its solution approach. 'Near-central' refers to cases in which the rays neither converge to a single point nor take severely arbitrary directions (the fully non-central case). Conventional calibration methods are difficult to apply in such cases. Although the generalized camera model can be applied, dense observation points are required for accurate calibration. Moreover, this approach is computationally expensive in the iterative projection framework. We developed a noniterative ray correction method based on sparse observation points to address this problem. First, we established a smoothed three-dimensional (3D) residual framework using a backbone to avoid using the iterative framework. Second, we interpolated the residual by applying local inverse distance weighting on the nearest neighbors of a given point. Specifically, we prevented excessive computation and the deterioration in accuracy that may occur in inverse projection through the 3D smoothed residual vectors. Moreover, the 3D vectors can represent the ray directions more accurately than 2D entities. Synthetic experiments show that the proposed method achieves prompt and accurate calibration. The depth error is reduced by approximately 63% on the bumpy shield dataset, and the proposed approach is roughly two orders of magnitude faster than the iterative methods.
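The interpolation step can be illustrated with a minimal sketch (a hypothetical helper, not the authors' implementation): the 3D residual at a query point is estimated by inverse distance weighting over its nearest sparse observation points.

```python
import math

def idw_residual(query, points, residuals, k=3, power=2.0, eps=1e-9):
    """Interpolate a 3-D residual vector at `query` by inverse distance
    weighting over the k nearest sparse observation points."""
    # Pair each observation with its distance to the query, nearest first.
    pairs = sorted(
        ((math.dist(query, p), r) for p, r in zip(points, residuals)),
        key=lambda t: t[0],
    )
    nearest = pairs[:k]
    # Exact hit: return the stored residual directly.
    if nearest[0][0] < eps:
        return nearest[0][1]
    weights = [1.0 / d ** power for d, _ in nearest]
    wsum = sum(weights)
    return tuple(
        sum(w * r[i] for w, (_, r) in zip(weights, nearest)) / wsum
        for i in range(3)
    )
```

A query point equidistant from two observations receives the average of their residual vectors, while closer observations dominate as the power parameter grows.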
In this paper, a novel multiple-camera-based visible light positioning (MC-VLP) algorithm is proposed for indoor three-dimensional (3D) positioning. The basic idea of MC-VLP is to utilize multiple cameras to simultaneously capture the beacon signal transmitted by a light-emitting diode (LED) mounted on the mobile terminal. Since the pinhole camera achieves high imaging accuracy and the fish-eye camera has a wide field of view (FOV), both can be adopted as receivers to improve positioning accuracy and coverage area. Meanwhile, the positioning errors along the X-axis, Y-axis, and Z-axis are derived to evaluate the performance of the MC-VLP algorithm. Based on the derived expression of the positioning error, we optimize the camera layout and tilt angle to minimize the positioning error in the coverage area. Simulation results show that higher positioning accuracy can be achieved by optimizing the camera layout and tilt angle. Experimental results show that the mean 3D positioning accuracy is 1.7 mm with three pinhole cameras at a height of 2.5 m.
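The multi-camera geometry can be sketched with a generic midpoint triangulation (an illustration only, not the paper's MC-VLP algorithm): each camera observation defines a ray toward the LED, and the position is estimated as the midpoint of the two rays' closest points.

```python
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)

def triangulate_midpoint(p1, d1, p2, d2):
    """Estimate a 3-D point from two rays p_i + t*d_i by taking the
    midpoint of their mutually closest points."""
    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b  # near zero when the rays are (almost) parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, t))  # closest point on ray 1
    q2 = add(p2, scale(d2, s))  # closest point on ray 2
    return scale(add(q1, q2), 0.5)
```

With noise-free rays the two closest points coincide and the midpoint is the exact intersection; with noisy observations the midpoint splits the residual between the two cameras.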
We present an algorithm that simultaneously calibrates two color cameras, a depth camera, and the relative pose between them. The method is designed to be accurate, practical, and applicable to a wide range of sensors. It requires only a planar surface to be imaged from various poses. The calibration does not use depth discontinuities in the depth image, which makes it flexible and robust to noise. We apply this calibration to a Kinect device and present a new depth distortion model for the depth sensor. We perform experiments that show improved accuracy with respect to the manufacturer's calibration.
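Plane-based calibration of this kind rests on fitting a plane to the depth samples of the imaged surface. A minimal least-squares sketch (our own illustration, assuming the plane can be written as z = ax + by + c in camera coordinates):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to 3-D points via the
    normal equations, solved with Cramer's rule. Returns (a, b, c)."""
    sxx = sum(x * x for x, y, z in points)
    sxy = sum(x * y for x, y, z in points)
    syy = sum(y * y for x, y, z in points)
    sx = sum(x for x, y, z in points)
    sy = sum(y for x, y, z in points)
    sxz = sum(x * z for x, y, z in points)
    syz = sum(y * z for x, y, z in points)
    sz = sum(z for x, y, z in points)
    n = len(points)
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    b = [sxz, syz, sz]
    d = det3(A)
    sol = []
    for col in range(3):  # Cramer: replace one column with b at a time
        M = [row[:] for row in A]
        for r in range(3):
            M[r][col] = b[r]
        sol.append(det3(M) / d)
    return tuple(sol)
```

In a real pipeline the fitted plane parameters from many poses constrain the intrinsics and the color-to-depth pose; the z = ax + by + c form breaks down for near-vertical planes, where a normal-based parameterization is preferable.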
Color-depth (RGB-D) cameras have become the primary sensors in most robotics systems, from service robotics to industrial robotics applications. Typical consumer-grade RGB-D cameras are provided with a coarse intrinsic and extrinsic calibration that generally does not meet the accuracy requirements of many robotics applications (e.g., highly accurate three-dimensional (3-D) environment reconstruction and mapping, high-precision object recognition, and localization). In this paper, we propose a human-friendly, reliable, and accurate calibration framework that enables easy estimation of both the intrinsic and extrinsic parameters of a general color-depth sensor couple. Our approach is based on a novel two-component error model. This model unifies the error sources of RGB-D pairs based on different technologies, such as structured-light 3-D cameras and time-of-flight cameras. Our method provides some important advantages compared with other state-of-the-art systems: it is general (i.e., well suited for different types of sensors), based on an easy and stable calibration protocol, provides greater calibration accuracy, and has been implemented within the Robot Operating System (ROS) robotics framework. We report detailed experimental validations and performance comparisons to support our statements.
Monitoring ephemeral and intermittent streams is a major challenge in hydrology. On-site inspections may be impractical in difficult-to-access environments. Motivated by the latest advancements in digital cameras and computer vision techniques, in this work we describe the development and application of a stage-camera system to monitor the water level in ungauged headwater streams. The system comprises a consumer-grade wildlife camera with near-infrared (NIR) night vision capabilities and a white pole that serves as the reference object in the collected images. The feasibility of the approach is demonstrated through a set of benchmark experiments performed in natural settings. The maximum mean absolute error between stage-camera and reference data is approximately 2 cm in the worst scenario, which corresponds to severe storms with intense rainfall and fog. Our preliminary results are encouraging and support the scalability of the stage-camera system to a wide range of natural settings.
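The stage estimate reduces to a pixel-to-metric conversion against the reference pole. A simplified sketch (hypothetical function and parameter names, assuming a near-fronto-parallel view and a dry reference image of the fully exposed pole):

```python
def water_stage(pole_len_m, pole_px_dry, visible_px):
    """Estimate water stage from the visible portion of a reference pole.

    pole_len_m  : physical pole length above the bed (m)
    pole_px_dry : pole length in pixels when fully exposed (dry reference)
    visible_px  : pole pixels above the waterline in the current frame
    """
    m_per_px = pole_len_m / pole_px_dry      # pixel scale from the dry image
    submerged_px = pole_px_dry - visible_px  # pixels hidden by the water
    return submerged_px * m_per_px
```

A full implementation would first segment the white pole in the NIR frame (e.g., by thresholding) to measure `visible_px`, and would correct for perspective if the camera looks at the pole obliquely.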
Radio-frequency technologies are widely applied in many fields such as mobile systems, healthcare systems, television and radio broadcasting, and satellite communications. However, one major problem in wireless communication based on radio frequencies is its impact on human health. High frequencies adversely impact human health more than low frequencies if the signal power exceeds the permissible threshold. Therefore, researchers are investigating the use of visible light waves (instead of the radio-frequency band) for data transmission in three major areas: visible light communication, light fidelity, and optical camera communication. In this paper, we propose a scheme that extends camera on-off keying (COOK), which is recommended by the IEEE 802.15.7-2018 standard, by combining it with a multiple-input multiple-output (MIMO) scheme. By applying techniques such as matched filtering, region-of-interest detection, and MIMO, the proposed scheme improves on the conventional scheme in data rate, communication distance, and bit error rate. By controlling the exposure time and focal length of a single camera and using channel coding, the proposed scheme can achieve a communication distance of up to 20 m with a low error rate.
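The matched-filter stage of an OOK receiver can be sketched as follows (a generic illustration, not the proposed COOK-MIMO pipeline): for a rectangular pulse, the matched filter reduces to averaging the samples of each symbol period before thresholding.

```python
def decode_ook(samples, sps, threshold=0.5):
    """Decode on-off keyed intensity samples.

    samples   : normalized light intensities in [0, 1]
    sps       : samples per symbol
    threshold : decision level between 'off' and 'on'

    For a rectangular transmit pulse, the matched filter is a moving
    average over one symbol; we then threshold the per-symbol energy.
    """
    bits = []
    for i in range(0, len(samples) - sps + 1, sps):
        energy = sum(samples[i:i + sps]) / sps
        bits.append(1 if energy > threshold else 0)
    return bits
```

In a camera receiver the `samples` would come from the pixel intensities of the region of interest around the LED, with `sps` set by the ratio of camera frame (or rolling-shutter row) rate to symbol rate.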
The discharge phase and time evolution of a 150 kHz high-power pulse burst discharge were observed. A vacuum chamber was constructed by connecting glass tubes, on which a solenoid coil was wound. Burst pulses with a width of 1000 μs and a repetition rate of 10 Hz were applied to the solenoid coil. A high-speed video camera and an intensified CCD camera were used to photograph the discharges. Observation of the discharge phase with the high-speed camera showed that the discharge ignites at 40 μs and propagates from the wall of the cylindrical reactor. Over time, the discharge pattern evolves and a branched pattern appears, with the number of branches changing with time. The blinking of the discharge synchronizes with the instantaneous power, which suggests that the discharge is generated and maintained by the electrostatic field produced at the sides of the coil. The propagation velocity calculated downstream decreases with increasing pressure and increases with increasing power.
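The velocity estimate amounts to differencing the front position across high-speed frames. A minimal sketch (the values below are hypothetical; the abstract does not give per-frame positions):

```python
def front_velocity(positions_m, frame_rate_hz):
    """Mean propagation velocity of a discharge front from its position
    in consecutive high-speed camera frames."""
    dt = 1.0 / frame_rate_hz  # time between frames
    steps = [(b - a) / dt for a, b in zip(positions_m, positions_m[1:])]
    return sum(steps) / len(steps)
```

Per-frame velocities (the `steps` list) would also reveal whether the front accelerates or decelerates along the tube, rather than only its mean speed.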