We present a search for a new narrow, spin-1, high-mass resonance decaying to $\mu^+\mu^- + X$, using a matrix-element-based likelihood and a simultaneous measurement of the resonance mass and production rate. In data with 4.6 fb$^{-1}$ of integrated luminosity collected by the CDF detector in $p\bar{p}$ collisions at $\sqrt{s}=1960$ GeV, the most likely signal cross section is consistent with zero at the 16\% confidence level. We therefore observe no evidence for a high-mass resonance, and place limits on models predicting spin-1 resonances, including $M > 1071$ GeV/$c^2$ at 95\% confidence level for a $Z'$ boson with the same couplings to fermions as the $Z$ boson.
We report on a measurement of $b$-hadron lifetimes in the fully reconstructed decay modes $B^+ \to J/\psi\, K^+$, $B^0 \to J/\psi\, K^{*}$, $B^0 \to J/\psi\, K_S$, and $\Lambda_b \to J/\psi\, \Lambda$, using data corresponding to an integrated luminosity of 4.3 ${\rm fb}^{-1}$ collected by the CDF II detector at the Fermilab Tevatron. The measured lifetimes are $\tau_{B^+} = 1.639 \pm 0.009~({\rm stat}) \pm 0.009~{\rm (syst)~ps}$, $\tau_{B^0} = 1.507 \pm 0.010~({\rm stat}) \pm 0.008~{\rm (syst)~ps}$, and $\tau_{\Lambda_b} = 1.537 \pm 0.045~({\rm stat}) \pm 0.014~{\rm (syst)~ps}$. The lifetime ratios are $\tau_{B^+}/\tau_{B^0} = 1.088 \pm 0.009~({\rm stat}) \pm 0.004~({\rm syst})$ and $\tau_{\Lambda_b}/\tau_{B^0} = 1.020 \pm 0.030~({\rm stat}) \pm 0.008~({\rm syst})$. These are the most precise determinations of these quantities from a single experiment.
We use a new method to estimate, with 5% accuracy, the contribution of pion and kaon in-flight decays to the dimuon data set acquired with the CDF detector. Based on this improved estimate, we show that the total number and the properties of the collected dimuon events are not yet accounted for by ordinary sources of dimuons, which also include the contributions, as measured in the data, of heavy flavor, $\Upsilon$, and Drell-Yan production, in addition to muons mimicked by hadronic punchthrough.
Deep n-well MAPS in a 130 nm CMOS technology: Beam test results
Neri, N.; Avanzini, C.; Batignani, G.; et al.
Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, Vol. 623, Issue 1, 11/2010.
Journal article, peer reviewed.
We report on recent beam-test results for the APSEL4D chip, a new deep n-well MAPS prototype with a full in-pixel signal processing chain, obtained by exploiting the triple-well option of a 0.13 μm CMOS process. The APSEL4D chip consists of a 4096-pixel matrix (32 rows and 128 columns) with a 50×50 μm² pixel cell area and a custom readout architecture capable of performing data sparsification at the pixel level. APSEL4D has been characterized in terms of charge collection efficiency and intrinsic spatial resolution under different discriminator threshold settings, using a 12 GeV/c proton beam in the T9 area of the CERN PS. We observe a maximum hit efficiency of 92% and estimate an intrinsic resolution of about 14 μm. The data-driven approach of the tracking detector readout chips has been successfully used to demonstrate the possibility of building a Level 1 trigger system based on associative memories. The analysis of the beam-test data is critically reviewed along with the characterization of the device under test.
We describe an important advancement of the Associative Memory device (AM). The AM is a VLSI processor for pattern recognition based on a Content Addressable Memory (CAM) architecture, optimized for online track finding in high-energy physics experiments. Pattern matching is carried out by finding track candidates in coarse-resolution "roads". A large AM bank stores all trajectories of interest, called "patterns", for a given detector resolution, and the AM extracts the roads compatible with a given event during detector read-out. Two important variables characterize the quality of an AM bank: its "coverage" and its level of fake roads. The coverage, which describes the geometric efficiency of a bank, is defined as the fraction of tracks that match at least one pattern in the bank. For a given road size, the coverage of the bank can be increased simply by adding patterns, but the number of fakes is, unfortunately, roughly proportional to the number of patterns in the bank. Moreover, as the luminosity increases, the fake rate grows rapidly because of the increased silicon occupancy. To counter that, we must reduce the width of our roads. If we decreased the road width using the current technology, the system would become very large and extremely expensive. We propose an elegant solution to this problem: "variable resolution patterns". Each pattern, and each detector layer within a pattern, uses the optimal width, with a "don't care" feature (inspired by ternary CAMs) to widen the road where that is more appropriate. In other words, we can use patterns of variable shape. As a result we reduce the number of fake roads, while keeping the efficiency high and avoiding the excessive bank size that narrower roads would otherwise require. We describe the idea, its implementation in the new AM design, and the implementation of the algorithm in the simulation.
Finally, we show the effectiveness of the "variable resolution patterns" idea using simulated high-occupancy events in the ATLAS detector.
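The per-layer "don't care" mechanism can be illustrated with a minimal sketch (plain Python, not the AM hardware; the bank contents, bin values, and widths below are invented for illustration). Each pattern stores, for every detector layer, a coarse bin plus a count of low-order bits to ignore, so the effective road width can differ layer by layer and pattern by pattern:

```python
def matches(pattern, hits_per_layer):
    """Return True if every layer has a hit inside the pattern's road."""
    for layer, (bin_value, dont_care_bits) in enumerate(pattern):
        # Ignoring low-order bits widens the road on this layer only,
        # mimicking the ternary-CAM "don't care" feature.
        road = bin_value >> dont_care_bits
        if not any((h >> dont_care_bits) == road
                   for h in hits_per_layer[layer]):
            return False
    return True

# Hypothetical 3-layer bank: the second pattern widens layer 1 by one bit.
bank = [
    [(0b1010, 0), (0b0110, 0), (0b1100, 0)],   # full resolution everywhere
    [(0b1010, 0), (0b0100, 1), (0b1100, 0)],   # layer 1 ignores 1 low bit
]
hits = [[0b1010], [0b0101], [0b1100]]          # layer-1 hit is one bin off
roads = [i for i, p in enumerate(bank) if matches(p, hits)]
print(roads)  # [1] -- only the variable-resolution pattern fires
```

The same hit list misses the full-resolution pattern but matches the one with a widened layer, which is exactly the trade-off between road width and fake rate discussed above.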
The Silicon Vertex Trigger (SVT) provides the CDF experiment with a powerful tool for fast and precise track finding and fitting at trigger level. The system enhances the experiment's reach in $B$ physics and in large-$P_T$ physics coupled to $b$ quarks. We review the main design features and the performance of the SVT, with particular attention to the recent upgrade that improved its capabilities. Finally, we focus on additional improvements to the functionality of such a system in a more general experimental context.
Real-time image analysis has undergone rapid development in the last few years, owing to the increasing availability of low-cost computational power. However, computing power is still a limit for some high-quality applications. High-resolution medical image processing, for example, is strongly demanding in both memory (~250 MB) and computational capability, if 3D processing is to complete in affordable time. We propose to reduce the execution time of image processing by exploiting the computing power of parallel arrays of Field Programmable Gate Arrays (FPGAs). We apply this idea to an algorithm that finds clusters of contiguous pixels above a programmable threshold and processes them to produce measurements that characterize their shape. It is a fast, general-purpose algorithm for high-throughput clustering of data with a two-dimensional organization. The two-dimensional problem maps well onto FPGAs, since their available logic is naturally organized into a two-dimensional array. The algorithm is designed to be implemented with FPGAs, but it can also benefit from cheaper custom electronics. The key feature is a very short processing time that scales linearly with the amount of data to be processed. This means that clustering can be performed in a pipeline with the image acquisition, without suffering from combinatorial delays due to looping multiple times through the whole data set.
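The clustering task itself can be sketched in a few lines of sequential Python (this is only a reference model of the problem, not the parallel FPGA pipeline described above; the test image and threshold are invented for illustration): group 4-connected pixels above threshold and compute simple shape measurements per cluster.

```python
from collections import deque

def find_clusters(image, threshold):
    """Group contiguous above-threshold pixels; return per-cluster measurements."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    clusters = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and not seen[r][c]:
                # Breadth-first flood fill over 4-connected neighbours.
                pixels, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x, image[y][x]))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                # Charge-weighted centroid as a basic shape measurement.
                q = sum(p[2] for p in pixels)
                cy = sum(p[0] * p[2] for p in pixels) / q
                cx = sum(p[1] * p[2] for p in pixels) / q
                clusters.append({"size": len(pixels), "charge": q,
                                 "centroid": (cy, cx)})
    return clusters

image = [[0, 0, 9, 0],
         [0, 8, 7, 0],
         [0, 0, 0, 5]]
print(find_clusters(image, threshold=4))  # one 3-pixel cluster, one 1-pixel
```

The FPGA version replaces the nested scan with logic that examines many pixels per clock, but the measurements produced per cluster are of this kind.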
The Silicon Vertex Trigger (SVT) is a processor developed at the CDF experiment to perform fast and precise online track reconstruction. SVT is made of two pipelined processors: the Associative Memory, which finds low-precision tracks by looking for coincidences between hits from the tracking detectors and track candidates ("roads") stored in memory, and the Track Fitter, which refines the track quality with high-precision fits. The GigaFitter is a next-generation track fitter, developed to reduce the degradation of the SVT efficiency caused by the increasing instantaneous luminosity. It reduces the track-parameter reconstruction to a few clock cycles and can perform many fits in parallel, thus allowing high-resolution tracking at very high rate. The core of the GigaFitter is implemented in a modern Xilinx Virtex-5 FPGA chip, rich in powerful DSP arrays. With respect to the current Track Fitter, the GigaFitter is faster and provided with much more memory, allowing it to store a greater number of roads to be used in the fit; this results in an increased SVT efficiency, as more track candidates can be reconstructed. The GigaFitter has been installed in parasitic mode at CDF and has been tested against the current Track Fitter. We describe the GigaFitter architecture, the parasitic installation at CDF, and its performance with respect to the current system.
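The reason a fit can finish in a few clock cycles is that, within a road, each track parameter is approximated as a linear function of the hit coordinates, $p_i = \sum_j C_{ij} x_j + q_i$, with the constants pre-computed per road. A minimal numerical sketch (the constants and hit values below are invented, not real SVT constants):

```python
# Pre-computed constants for one hypothetical road: two output parameters
# (e.g. curvature and impact parameter) from four hit coordinates.
C = [[ 0.5, -0.25,  0.1,  0.0],
     [-0.1,  0.3,  -0.2,  0.4]]
q = [0.01, -0.02]

def fit_track(hits):
    # One multiply-accumulate chain per parameter: this is what maps
    # naturally onto the FPGA's DSP blocks, many fits running in parallel.
    return [sum(c * x for c, x in zip(row, hits)) + q_i
            for row, q_i in zip(C, q)]

params = fit_track([1.0, 2.0, 0.5, 1.5])
print(params)
```

No iteration or matrix inversion happens online; all the linear-algebra work is folded into `C` and `q` ahead of time, one set per road.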
As the LHC luminosity is ramped up to $3\times10^{34}$ cm$^{-2}$ s$^{-1}$ and beyond, the high rates, multiplicities, and energies of particles seen by the detectors will pose a unique challenge. Only a tiny fraction of the produced collisions can be stored on tape, so immense real-time data reduction is needed. An effective trigger system must maintain high trigger efficiencies for the physics we are most interested in, and at the same time suppress the enormous QCD backgrounds. This requires massive computing power to minimize the online execution time of complex algorithms. A multi-level trigger is an effective solution for an otherwise impossible problem. The Fast Tracker (FTK) is a proposed upgrade to the current ATLAS trigger system that will operate at the full Level-1 output rate and provide high-quality tracks, reconstructed over the entire detector, by the start of processing in Level-2. FTK solves the combinatorial challenge inherent to tracking by exploiting the massive parallelism of associative memories, which can compare inner-detector hits to millions of pre-calculated patterns simultaneously. The tracking problem within matched patterns is further simplified by using pre-computed linearized fitting constants and leveraging fast DSPs in modern commercial FPGAs. Overall, FTK is able to compute the helix parameters for all tracks in an event and apply quality cuts in less than 100 μs. The system design is defined and studied with respect to high transverse momentum (high-$P_T$) Level-2 objects: $b$-jets, tau-jets, and isolated leptons. We test the FTK algorithms using the ATLAS full simulation with $WH$ events at luminosities up to $3\times10^{34}$ cm$^{-2}$ s$^{-1}$, comparing FTK results with the offline tracking capability. We present the architecture and the reconstruction performance for the aforementioned high-$P_T$ Level-2 objects.
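The quality cuts mentioned above also fit the linearized scheme: with pre-computed constants, the fit residuals are themselves linear in the hit coordinates, so a chi-square test costs only a few extra multiply-accumulate operations per candidate. A toy sketch (the constant matrix, offsets, and cut value are invented for illustration, not FTK constants):

```python
# Hypothetical residual constants for one pattern: each row gives one
# residual as a linear function of the four hit coordinates.
A = [[0.2,  -0.2,   0.1,  -0.1],
     [0.05,  0.05, -0.05, -0.05]]
k = [0.0, 0.1]

def chi2(hits):
    """chi2 = sum over residuals of (A . hits + k)^2 -- all linear algebra."""
    return sum((sum(a * x for a, x in zip(row, hits)) + k_i) ** 2
               for row, k_i in zip(A, k))

good = chi2([1.0, 1.0, 1.0, 1.0]) < 0.05   # consistent hits: accept
bad  = chi2([1.0, 4.0, 1.0, 1.0]) < 0.05   # one off-road hit: reject
print(good, bad)  # True False
```

A candidate whose hits are mutually consistent yields residuals near zero and passes; a coincidence of unrelated hits inside the same road yields a large chi-square and is discarded, which is how the fake roads surviving the pattern match are cleaned up.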
We report on further developments of our recently proposed design approach for a full in-pixel signal processing chain in deep n-well (DNW) MAPS sensors, exploiting the triple-well option of a 0.13 μm CMOS process. The optimization of the collecting-electrode geometry and the re-design of the analog circuit to decrease power consumption have been implemented in two versions of the APSEL chip series, namely "APSEL3T1" and "APSEL3T2". The results of the characterization of 3×3 pixel matrices with full analog output, using photons from $^{55}$Fe and electrons from $^{90}$Sr, are described. Pixel equivalent noise charge (ENC) values of 46 e$^-$ and 36 e$^-$ have been measured for the two versions of the front-end, together with signal-to-noise ratios between 20 and 30 for minimum-ionizing particles. In order to fully exploit the readout capabilities of our MAPS, a dedicated fast readout architecture, performing on-chip data sparsification and providing timing information for the hits, has been implemented in the prototype chip "APSEL4D", which has 4096 pixels. The criteria followed in the design of the readout architecture are reviewed. The implemented readout architecture is data-driven and scalable to chips larger than the current one, which has 32 rows and 128 columns. Tests of the functional characterization of the chip and of its response to radioactive sources have shown encouraging preliminary results. A successful beam test took place in September 2008. Preliminary measurements of the APSEL4D charge collection efficiency and resolution confirmed that the DNW device works well. Moreover, the data-driven approach of the readout chips has been successfully used to demonstrate the possibility of building a Level 1 trigger system based on Associative Memories.