Top-K sorting is a widely used technique for selecting the K largest or smallest values from a set of input elements. In this paper, we present an efficient, low-power, and flexible top-K sorting architecture with cell gating on field-programmable gate arrays (FPGAs). Our architecture consists of a data filter unit, a cell counter, and L cascaded sorting cells: the filter unit lets users select user-defined data values to sort, the cell counter enables cell gating by counting the number of working cells, and the sorting cells perform low-power sorting with a flexible K length under cell gating. The proposed sorting cells update on incoming data continuously, and cell gating increases both the flexibility of the top-K length and the energy efficiency by turning cells on and off. Our implementation consumes only 0.3 W with L = 128 at 200 MHz on a Xilinx XCKU115 FPGA. Overall, our work advances the state of the art in efficient and flexible top-K sorting on FPGAs.
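The cascaded-cell behavior described above can be sketched in software. The following is a minimal, illustrative model (class and method names are hypothetical, not from the paper): each cell holds one candidate, an incoming value ripples through the cascade, and the active cells always hold the K largest values seen so far in sorted order. Cell gating is approximated here by simply bounding the number of active cells at K.

```python
# Hypothetical software analogy of L cascaded sorting cells for top-K
# selection. Each cell keeps the larger of (its held value, the incoming
# value) and passes the smaller one down the cascade.
class CascadedTopK:
    def __init__(self, k):
        self.k = k        # number of working cells (cell gating disables the rest)
        self.cells = []   # cells[0] holds the largest value seen so far

    def push(self, value):
        # Ripple the incoming value through the cascade of cells.
        for i, held in enumerate(self.cells):
            if value > held:
                self.cells[i], value = value, held
        # Only k cells are active; values displaced past the last cell are dropped.
        if len(self.cells) < self.k:
            self.cells.append(value)

    def top_k(self):
        return list(self.cells)
```

Feeding the stream `[5, 1, 9, 3, 7]` with `k = 3` leaves the cells holding `[9, 7, 5]`.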
In this article, a 1.25-V 8-Gb 16-Gb/s/pin GDDR6-based accelerator-in-memory (AiM) is presented. A dedicated command (CMD) set for deep learning (DL) is introduced to minimize latency when switching operation modes, and a bank-wide mantissa shift (BWMS) scheme is adopted to minimize calculation delay, current consumption, and circuit area during multiply-accumulate (MAC) operations. By storing a lookup table (LUT) in reserved word lines of the dynamic random access memory (DRAM) bank cell array, the chip supports various activation functions (AFs), such as Gaussian error linear unit (GELU), sigmoid, and tanh, as well as rectified linear unit (ReLU) and leaky ReLU. Performance was evaluated by measuring the fabricated chip on automated test equipment (ATE) and on a self-manufactured field-programmable gate array (FPGA)-based system. In the ATE-level evaluation, the chip operates at 16 Gb/s/pin down to a supply voltage of 1.10 V. When evaluated with GEMV and MNIST workloads on the FPGA-based system, performance gains of 7.5-10.5x over HBM2-based and GDDR6-based systems were confirmed.
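The general idea of table-based activation functions can be illustrated as follows. This is a minimal sketch only: the input range, table size, and nearest-entry quantization are illustrative assumptions, not the chip's actual encoding, and the table stands in for values the hardware would hold in reserved DRAM word lines.

```python
import math

# Illustrative LUT parameters (assumptions, not taken from the paper).
LUT_MIN, LUT_MAX, LUT_SIZE = -8.0, 8.0, 256
STEP = (LUT_MAX - LUT_MIN) / (LUT_SIZE - 1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Precompute the table once, as the device would store it in the bank.
SIGMOID_LUT = [sigmoid(LUT_MIN + i * STEP) for i in range(LUT_SIZE)]

def sigmoid_lut(x):
    # Clamp to the table range, quantize to the nearest entry, look up.
    x = max(LUT_MIN, min(LUT_MAX, x))
    idx = round((x - LUT_MIN) / STEP)
    return SIGMOID_LUT[idx]
```

Swapping in a different precomputed table (GELU, tanh, and so on) changes the activation without changing the lookup path, which is what makes the scheme flexible across AFs.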
This poster presents the system architecture, software stack, and performance analysis of SK hynix's first GDDR6-based processing-in-memory (PIM) product sample, called Accelerator-in-Memory (AiM). AiM is designed for in-memory acceleration of matrix-vector product operations, which are common in machine learning applications. The strength of AiM comes primarily from two design factors: 1) all-bank operation support and 2) an extended DRAM command set. All-bank operations allow AiM to fully utilize the abundant internal DRAM bandwidth, making it an attractive solution for memory-bound applications. The extended command set allows the host to issue these new operations efficiently and provides a clean separation of concerns between the AiM architecture and its software-stack design. We present a dedicated FPGA-based reference platform with a software stack, which is used to validate the AiM design and evaluate its system-level performance. We also demonstrate FMC-based AiM extension cards that are compatible with off-the-shelf FPGA boards and serve as an open research platform, giving potential collaborators and academic institutions access to our hardware and software systems.
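The all-bank idea can be sketched with a simple functional model. The bank count and round-robin row mapping below are illustrative assumptions, not SK hynix's actual layout: rows are striped across banks, every bank computes its partial matrix-vector products on the same broadcast input vector, and the host gathers the results.

```python
# Hypothetical model of all-bank GEMV: matrix rows are striped across
# banks so all banks work in parallel on the same input vector.
NUM_BANKS = 16  # illustrative bank count

def gemv_all_bank(matrix, vector):
    # Assign each row to a bank round-robin; an all-bank command would
    # broadcast the input vector to every bank at once.
    bank_results = {b: [] for b in range(NUM_BANKS)}
    for r, row in enumerate(matrix):
        acc = sum(a * x for a, x in zip(row, vector))  # per-bank MAC
        bank_results[r % NUM_BANKS].append((r, acc))
    # The host gathers the per-bank partial results back into row order.
    out = [0.0] * len(matrix)
    for partials in bank_results.values():
        for r, acc in partials:
            out[r] = acc
    return out
```

Because each bank only touches its own rows, the model makes plain why GEMV scales with the number of banks rather than with external pin bandwidth.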