1...2...3...I Love to Count
Lavine, Marc S.
Science (American Association for the Advancement of Science), 11/2010, Volume 330, Issue 6008
Journal Article
• Survey of CNN-based approaches for crowd counting and density estimation.
• Discussion of recent methods based on hand-crafted representations.
• Recent datasets that pose various challenges are discussed.
• Detailed analysis and comparison of results of CNN-based and traditional methods.
• Discussion of future directions and trends for further progress.
Estimating counts and density maps from crowd images has a wide range of applications, such as video surveillance, traffic monitoring, public safety, and urban planning. In addition, techniques developed for crowd counting can be applied to related tasks in other fields of study, such as cell microscopy, vehicle counting, and environmental surveys. The task of crowd counting and density map estimation is riddled with challenges such as occlusions, non-uniform density, and intra-scene and inter-scene variations in scale and perspective. Nevertheless, over the last few years, crowd count analysis has evolved from earlier methods, often limited to small variations in crowd density and scale, to current state-of-the-art methods that perform successfully on a wide range of scenarios. The success of crowd counting methods in recent years can be largely attributed to deep learning and the publication of challenging datasets. In this paper, we provide a comprehensive survey of recent Convolutional Neural Network (CNN) based approaches that have demonstrated significant improvements over earlier methods relying largely on hand-crafted representations. First, we briefly review the pioneering methods that use hand-crafted representations; then we examine in detail the deep learning-based approaches and recently published datasets. Furthermore, we discuss the merits and drawbacks of existing CNN-based approaches and identify promising avenues of research in this rapidly evolving field.
Kernel-Based Density Map Generation for Dense Object Counting
Wan, Jia; Wang, Qingzhong; Chan, Antoni B.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022-03-01, Volume 44, Issue 3
Journal Article
Peer reviewed
Crowd counting is an essential topic in computer vision due to its practical use in surveillance systems. The typical design of crowd counting algorithms is divided into two steps. First, the ground-truth density maps of crowd images are generated from the ground-truth dot maps (density map generation), e.g., by convolving with a Gaussian kernel. Second, deep learning models are designed to predict a density map from an input image (density map estimation). Density map based counting methods, which incorporate the density map as an intermediate representation, have improved counting performance dramatically. However, in the sense of end-to-end training, the hand-crafted methods used for generating the density maps may not be optimal for the particular network or dataset used. To address this issue, we propose an adaptive density map generator, which takes the annotation dot map as input and learns a density map representation for a counter. The counter and generator are trained jointly within an end-to-end framework. We also show that the proposed framework can be applied to general dense object counting tasks. Extensive experiments are conducted on 10 datasets for 3 applications: crowd counting, vehicle counting, and general object counting. The experimental results on these datasets confirm the effectiveness of the proposed learnable density map representations.
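The fixed-kernel baseline that the learnable generator replaces can be sketched in a few lines: place a normalized Gaussian at each annotated head location so that the resulting map integrates to the person count. This is a minimal illustration with assumed parameter values (sigma, kernel radius), not the adaptive generator proposed in the paper:

```python
import numpy as np

def density_map_from_dots(dot_coords, shape, sigma=2.0, radius=8):
    """Baseline ground-truth density map: stamp a normalized Gaussian
    at each annotated (y, x) dot so the map integrates to the count."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = np.exp(-(yy**2 + xx**2) / (2.0 * sigma**2))
    kernel /= kernel.sum()            # each person contributes exactly 1
    dm = np.zeros(shape)
    for y, x in dot_coords:           # assumes dots lie >= radius from edges
        dm[y - radius:y + radius + 1, x - radius:x + radius + 1] += kernel
    return dm

# Three annotated heads on a 64 x 64 image: the map sums to 3
dm = density_map_from_dots([(10, 12), (30, 40), (50, 20)], (64, 64))
```

A counter trained against such maps recovers the count by integrating its predicted density map; the paper's point is that the kernel need not be hand-picked like this but can instead be learned jointly with the counter.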
Background
Edge‐on‐irradiated silicon detectors are currently being investigated for use in full‐body photon‐counting computed tomography (CT) applications. The low atomic number of silicon leads to a significant number of incident photons being Compton scattered in the detector, depositing a part of their energy and potentially being counted multiple times. Even though the physics of Compton scatter is well established, the effects of Compton interactions in the detector on image quality for an edge‐on‐irradiated silicon detector have not yet been thoroughly investigated.
Purpose
To investigate and explain the effects of Compton scatter on low‐frequency detective quantum efficiency (DQE) for photon‐counting CT using edge‐on‐irradiated silicon detectors.
Methods
We extend an existing Monte Carlo model of an edge‐on‐irradiated silicon detector with 60 mm active absorption depth, previously used to evaluate spatial‐frequency‐based performance, to develop projection and image domain performance metrics for pure density and pure spectral imaging tasks with 30 and 40 cm water backgrounds. We show that the lowest energy threshold of the detector can be used as an effective discriminator of primary counts and cross‐talk caused by Compton scatter. We study the developed metrics as functions of the lowest threshold energy for root‐mean‐square electronic noise levels of 0.8, 1.6, and 3.2 keV, where the intermediate level 1.6 keV corresponds to the noise level previously measured on a single sensor element in isolation. We also compare the performance of a modeled detector with 8, 4, and 2 optimized energy bins to a detector with 1‐keV‐wide bins.
Results
In terms of low‐frequency DQE for density imaging, there is a tradeoff between using a threshold low enough to capture Compton interactions and avoiding electronic noise counts. For a 30 cm water phantom, 4 energy bins, and root‐mean‐square electronic noise of 0.8, 1.6, and 3.2 keV, it is optimal to place the lowest energy threshold at 3, 6, and 1 keV, which gives optimal projection‐domain DQEs of 0.64, 0.59, and 0.52, respectively. Low‐frequency DQE for spectral imaging also benefits from measuring Compton interactions, with respective optimal thresholds of 12, 12, and 13 keV. No large dependence on background thickness was observed. For the intermediate noise level (1.6 keV), increasing the lowest threshold from 5 to 35 keV increases the variance in an iodine basis image by 60%–62% (30 cm phantom) and 67%–69% (40 cm phantom) with 8 bins. Both spectral and density DQE are adversely affected by increasing the electronic noise level. Image‐domain DQE exhibits qualitative behavior similar to projection‐domain DQE.
Conclusions
Compton interactions contribute significantly to the density imaging performance of edge‐on‐irradiated silicon detectors. With the studied detector topology, the benefit of counting primary Compton interactions outweighs the penalty of multiple counting at all lowest threshold energies. Compton interactions also contribute significantly to the spectral imaging performance for measured energies above 10 keV.
Accurate people count estimation, potentially in real time, for both indoor and outdoor environments is of major importance in the smart cities of tomorrow. Application areas such as public transportation, urban analytics, building automation, and disaster management are all expected to benefit from a better understanding of occupancy in public premises. A large body of work has concentrated on people counting solutions based on images captured by surveillance cameras. However, image-based approaches are costly, as they require dedicated hardware installations, and are often privacy intruding. Thus, academic and industry researchers are looking into alternative solutions for people counting. In this paper, we present a comprehensive study of non-image-based people counting techniques. Our goal with this paper is twofold: 1) to serve as an introduction for everyone interested in gaining a better understanding of non-image-based people counting techniques and 2) to serve as a guideline for practitioners interested in implementing and testing specific solutions in their everyday practice. To this end, we provide a novel classification of available approaches and outline the requirements they need to meet. We further discuss different academic solutions in detail and provide comparisons between them. Furthermore, we discuss available industrial approaches and compare them to academic proposals. Finally, we discuss open challenges and future directions in the field of non-image-based people counting.
Purpose
Smaller pixel sizes of x‐ray photon counting detectors (PCDs) benefit count rate capabilities but increase cross‐talk and “double‐counting” between neighboring PCD pixels. When an x‐ray photon produces multiple (n) counts at neighboring (sub‐)pixels and these are added during the post‐acquisition N × N binning process, the variance of the final PCD output‐pixel will be larger than its mean. Meanwhile, anti‐scatter grids are placed at the pixel boundaries in most x‐ray CT systems and will decrease cross‐talk between sub‐pixels, because the grids mask the sub‐pixels underneath them, block the primary x‐rays, and increase the separation distance between active sub‐pixels. The aim of this paper was, first, to study the PCD statistics with various N × N binning schemes and three different masking methods in the presence of cross‐talk, and second, to assess one of the most fundamental performance measures of x‐ray CT: soft tissue contrast visibility.
Methods
We used a PCD cross‐talk model (Photon counting toolkit, PcTK) to produce cross‐talk data between 3 × 3 neighboring sub‐pixels and calculated the mean, variance, and covariance of output‐pixels for each N × N binning scheme (4 × 4 binning, 2 × 2 binning, and 1 × 1 binning, i.e., no binning) and three different sub‐pixel masking methods (no mask, 1‐D mask, and 2‐D mask). We then set up a simulation to evaluate soft tissue contrast visibility. X‐rays of 120 kVp were attenuated by 10–40 cm‐thick water, with the right side of the PCDs having 0.5 cm thicker water than the left side. A pair of output‐pixels across the left‐right boundary was used to assess the sensitivity index (SI or d′), which typically ranges from 0 to 1 and is a generalized signal‐to‐noise ratio, a statistic used in signal detection theory.
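The role of the sensitivity index can be illustrated with the common signal-detection definition d′ = |μ₁ − μ₂| / √((σ₁² + σ₂²)/2); the exact normalization in the paper may differ, and the numbers below are invented solely to show how a variance-to-mean ratio above 1 (as produced by double-counting) degrades d′ for the same mean contrast:

```python
import math

def sensitivity_index(mu1, var1, mu2, var2):
    """One common d' definition: mean difference over the pooled
    standard deviation of the two output-pixel distributions."""
    return abs(mu1 - mu2) / math.sqrt(0.5 * (var1 + var2))

# Hypothetical counts: same 2% mean contrast in both cases, but
# cross-talk inflates the variance-to-mean ratio from 1.0 to 1.3.
d_ideal = sensitivity_index(1000.0, 1000.0, 980.0, 980.0)   # Poisson-like
d_cross = sensitivity_index(1000.0, 1300.0, 980.0, 1274.0)  # VMR = 1.3
```

With identical means, the excess variance alone lowers d′, which is why the paper tracks the variance-to-mean ratio alongside mean counts for each binning and masking scheme.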
Results
Binning a larger number of sub‐pixels resulted in larger mean counts and a larger variance‐to‐mean ratio when the lower threshold of the energy window was below half of the incident energy. Mean counts were largest with no mask, followed by the 1‐D mask and the 2‐D mask, but the differences in variance‐to‐mean ratio were small. For a given sub‐pixel size and masking method, binning more sub‐pixels degraded the normalized SI values, but the difference between 4 × 4 binning and 1 × 1 binning was typically less than 0.06. The 1‐D mask provided better normalized SI values than no mask and the 2‐D mask for the side‐by‐side case, and the improvements were larger with less binning, although the difference was less than 0.10. The 2‐D mask was best for the embedded case. For a given masking method, the normalized SI values of combined binning, sub‐pixel size, and masking were, in decreasing order, 1 × 1 (900 μm)² binning, 2 × 2 (450 μm)² binning, and 4 × 4 (225 μm)² binning, but the differences between them were typically 0.02–0.05.
Conclusion
We have evaluated the effect of double‐counting between PCD sub‐pixels with various binning and masking methods. SI values were better with less binning and larger sub‐pixels. The difference among the various binning and masking methods, however, was typically less than 0.06, which might result in a dose penalty of 13% if the CT system were linear.
• We study the connection between field normalization and counting methods.
• Our focus is on the choice between full and fractional counting.
• We argue that full counting results are not properly field normalized.
• Fractional counting does yield properly field-normalized results.
• We present a large-scale empirical comparison between full and fractional counting.
Bibliometric studies often rely on field-normalized citation impact indicators in order to make comparisons between scientific fields. We discuss the connection between field normalization and the choice of a counting method for handling publications with multiple co-authors. Our focus is on the choice between full counting and fractional counting. Based on an extensive theoretical and empirical analysis, we argue that properly field-normalized results cannot be obtained when full counting is used. Fractional counting does provide results that are properly field normalized. We therefore recommend the use of fractional counting in bibliometric studies that require field normalization, especially in studies at the level of countries and research organizations. We also compare different variants of fractional counting. In general, it seems best to use either the author-level or the address-level variant of fractional counting.
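The two counting methods being compared are easy to state precisely: full counting credits every co-author with one unit per paper, while author-level fractional counting splits one unit equally among the co-authors, so total credit equals the number of papers. A minimal sketch (the paper lists and author names are invented):

```python
from collections import defaultdict

def count_publications(papers, fractional=True):
    """Author-level counting. Full counting: 1 credit per co-author per
    paper. Fractional counting: each paper's single credit is split
    equally among its co-authors, so credits sum to the paper count."""
    credit = defaultdict(float)
    for authors in papers:
        share = 1.0 / len(authors) if fractional else 1.0
        for author in authors:
            credit[author] += share
    return dict(credit)

papers = [["A", "B"], ["A", "B", "C"], ["C"]]
full = count_publications(papers, fractional=False)  # A=2, B=2, C=2
frac = count_publications(papers, fractional=True)   # A=5/6, B=5/6, C=4/3
```

Note how full counting inflates the total (6 credits for 3 papers) in proportion to co-authorship levels, which differ across fields; this is the mechanism behind the argument that full counting breaks field normalization.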
Cell counting is a fundamental measurement in the biosciences and is critical for cell-based assays, as well as in the manufacturing control of cell-derived products and the release of cell-based products. An inaccurate or imprecise cell count can lead to significant bias and variability in cell-based assays. There are many approaches to measuring a cell count; however, there is no consensus on a reference method. Imaging and flow-based methods rely on user-based gating of cells, debris, and various cell populations. In contrast, a cell count based on the number of genome copies provides a biologically based definition of cells that can easily be transferred from site to site. Additionally, this approach can identify and count cell populations with specified genomic markers. We are implementing droplet digital polymerase chain reaction (ddPCR) to quantify the number of genome copies and then convert this value to a cell count. Digital PCR is becoming a common technique in cell-based product testing, and this approach allows the critical cell count value to be obtained simultaneously with other dPCR assays.
For method development, we are using a set of clonal Jurkat cell lines with defined copy numbers of a reference lentiviral vector integrated into their genomes. DNA extraction and purification were conducted using the Qiagen column-based DNeasy kit. We characterize the cell cycle population distribution and incorporate this analysis into the calculation of cell count by estimating the average genome copies per cell. We have also incorporated a method using a DNA control spike-in to quantify DNA recovery during extraction that is independent of assumptions about the starting cell count. Elution volume was also accounted for in our calculation of cell count from ddPCR analysis. Using the ISO 20391-2:2019 dilution series experimental design and statistical analysis for evaluating cell counting quality, we compared the proportionality and precision of the ddPCR-based cell count with trypan blue based and AO/DAPI based cell counting methods. We found that the proportionality and precision of ddPCR-based cell counts can be similar to those of more traditional cell counting methods. Bias was detected between ddPCR-based counts and trypan blue and AO/DAPI measurements; however, the level of bias was highly dependent on the percent recovery correction applied to the ddPCR data. Variability in the ddPCR cell counts was improved when accounting for elution volume in column-based DNA purification.
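The conversion described, from a ddPCR genome-copy concentration to a cell count corrected for DNA recovery during extraction and for the cell-cycle-weighted average genome copies per cell, can be sketched as follows. The function name and the example numbers are hypothetical, not values from the study:

```python
def cells_from_ddpcr(copies_per_ul, elution_volume_ul,
                     recovery_fraction, genomes_per_cell):
    """Convert a ddPCR genome-copy concentration to a cell count.
    recovery_fraction: DNA recovered during extraction (from spike-in).
    genomes_per_cell: cell-cycle-weighted average copies per cell
    (> 1 for a cycling population, since cells in S/G2/M carry > 1
    genome copy). All inputs here are illustrative."""
    total_copies = copies_per_ul * elution_volume_ul
    return total_copies / (recovery_fraction * genomes_per_cell)

# e.g. 5,000 copies/uL in a 100 uL eluate, 80% recovery, and an
# average of 1.25 genome copies per cell
n_cells = cells_from_ddpcr(5000.0, 100.0, 0.80, 1.25)
```

This bookkeeping makes explicit why the reported bias depends so strongly on the percent-recovery correction: the estimate scales inversely with `recovery_fraction`, so any error there propagates directly into the cell count.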