Database Management Systems (DBMS) have become an essential tool for industry and research and are often a significant component of data centers. There have been many efforts to accelerate DBMS application performance. One of the most explored techniques is the use of vector processing. Unfortunately, conventional vector architectures have not been able to exploit the full potential of DBMS acceleration.

In this paper, we present VAQUERO, our Scratchpad-based Vector Accelerator for QUEry pROcessing. VAQUERO improves the efficiency of vector architectures for DBMS operations that feature lookup tables, such as data aggregation and hash joins. Lookup tables are significant contributors to performance bottlenecks in DBMS processing, as they suffer from insufficient ISA support in the form of scatter-gather instructions. VAQUERO introduces a novel Advanced Scratchpad Memory specifically designed with two mapping modes: direct and associative. These mapping modes enable VAQUERO to accelerate real-world databases with workload sizes that significantly exceed the scratchpad memory capacity. Additionally, the associative mode allows VAQUERO to be used with DBMS operators that rely on hashed keys, e.g., hash-join and hash-aggregate. VAQUERO has been designed around general DBMS algorithm requirements rather than a particular database organization; for this reason, it is capable of accelerating DBMS operators for both row- and column-oriented databases.

We evaluate the efficiency of VAQUERO using two highly optimized, popular open-source DBMS: the row-based PostgreSQL and the column-based MonetDB. We implemented VAQUERO at the RTL level and prototyped it, by performing Place&Route, at the 7nm technology node. VAQUERO incurs a modest 0.15% area overhead compared with an Intel Ice Lake processor.
Our evaluation shows that VAQUERO significantly outperforms PostgreSQL and MonetDB, by 2.09× and 3.32× respectively, when processing operators and queries from the TPC-H benchmark.
Sparse matrix operations are critical kernels in multiple application domains such as High Performance Computing, artificial intelligence, and big data. Vector processing is widely used to improve performance on mathematical kernels with dense matrices. Unfortunately, existing vector architectures do not cope well with sparse matrix computations, achieving much lower performance than on their dense counterparts.

To overcome this limitation, we present the Vector Indexed Architecture (VIA), a novel hardware vector architecture that accelerates applications with irregular memory access patterns such as sparse matrix computations. There are two main bottlenecks when computing with sparse matrices: irregular memory accesses and index matching. VIA addresses these two bottlenecks with a smart scratchpad that is tightly coupled to the Vector Functional Units within the core. Thanks to this structure, VIA improves locality for sparse-dense computations and speeds up the index-matching search process for sparse computations. As a result, VIA achieves significant speedups over highly optimized state-of-the-art C++ algebra libraries: on average, 4.22×, 6.14×, and 6.00× on sparse matrix-vector multiplication, sparse matrix addition, and sparse matrix-matrix multiplication kernels, respectively, when evaluated over a thousand sparse matrices that arise in real applications. In addition, we demonstrate the generality of VIA by showing that it can accelerate histogram and stencil applications by 4.5× and 3.5×, respectively.
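The index-matching bottleneck named in the abstract can be seen in a scalar sketch (illustrative only, not VIA itself): adding two sparse vectors stored as sorted (index, value) pairs requires merging the two index lists and matching equal indices, which is the search that VIA's smart scratchpad accelerates in hardware.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Sparse vector stored as (index, value) pairs, sorted by index.
using SparseVec = std::vector<std::pair<int, double>>;

// Sparse addition by index matching: a two-pointer merge over the operands'
// index lists. Every output element requires comparing indices, so the loop
// is bound by this matching search rather than by arithmetic.
SparseVec sparse_add(const SparseVec& a, const SparseVec& b) {
    SparseVec out;
    size_t i = 0, j = 0;
    while (i < a.size() && j < b.size()) {
        if (a[i].first == b[j].first) {          // indices match: add values
            out.push_back({a[i].first, a[i].second + b[j].second});
            ++i; ++j;
        } else if (a[i].first < b[j].first) {    // unmatched element of a
            out.push_back(a[i++]);
        } else {                                 // unmatched element of b
            out.push_back(b[j++]);
        }
    }
    while (i < a.size()) out.push_back(a[i++]);  // drain remainder of a
    while (j < b.size()) out.push_back(b[j++]);  // drain remainder of b
    return out;
}
```

The same matching step appears in sparse matrix addition and in the inner products of sparse matrix-matrix multiplication, which is why accelerating it benefits all three kernels the evaluation covers.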
This review article arises from a marked interest in analyzing disruptive innovation processes as an inclusion strategy toward social entrepreneurship. As for the methodology, a systematic documentary review of indexed databases was carried out in order to consult and identify relevant information for compiling and constructing the theoretical references needed to answer the question guiding the research: why analyze disruptive innovation processes as an inclusion strategy toward social entrepreneurship? The research follows a qualitative documentary approach, with a descriptive-explanatory scope and a design based on grounded theory.
Abstract: Planning is a strategic process for achieving results within institutions. In public administration, territorial entities have the right and the obligation to organize and develop a series of strategies that allow them to reach higher levels of quality of life for their communities. Based on a quantitative study, a design of experiments with four factors is developed to measure the management component of 1,121 municipalities of the country in 2017, identifying the opportunities and guidance that mayors require to improve the municipal performance index through factor analysis of the management component of Colombia's new municipal measurement index. Factor analysis is a multivariate technique that reduces the "size" of a problem without "too much loss of information"; that is, this analysis (common factor analysis or principal components analysis) seeks to reduce the data supplied by a correlation matrix to make it more readily interpretable. It seeks to answer the question: why are some variables more related to each other and less to others? The data come from the Departamento Nacional de Planeación (DANE), and based on the multivariate study it is concluded that municipal governments mainly need to work on executing resources, on land-use planning, and on open and transparent government.
Genome sequence analysis is fundamental to medical breakthroughs such as developing vaccines, enabling genome editing, and facilitating personalized medicine. The exponentially expanding sequencing datasets and the complexity of sequencing algorithms necessitate performance enhancements. While the performance of software solutions is constrained by their underlying hardware platforms, the utility of fixed-function accelerators is restricted to only certain sequencing algorithms.

This paper presents QUETZAL, the first general-purpose vector acceleration framework designed for high efficiency and broad applicability across a diverse set of genomics algorithms. While a commercial CPU's vector datapath is a promising candidate to exploit the data-level parallelism in genomics algorithms, our analysis finds that its performance is often limited by long-latency scatter/gather memory instructions. QUETZAL introduces a hardware-software co-design comprising an accelerator microarchitecture closely integrated with the CPU's vector datapath, alongside novel vector instructions that fully capitalize on the proposed hardware. QUETZAL integrates a set of scratchpad-style buffers meticulously designed to minimize the latency of scatter/gather instructions during the retrieval of input genome sequence data. QUETZAL supports both short and long reads, and different types of sequencing data formats. A combination of hardware and software techniques enables QUETZAL to reduce the latency of memory instructions, perform complex computation using a single instruction, and transform data representations at runtime, resulting in overall efficiency gains. QUETZAL significantly accelerates a vectorized CPU baseline on modern genome sequence analysis algorithms by 5.7×, while incurring a small area overhead of 1.4%, post place-and-route at the 7nm technology node, compared to an HPC ARM CPU.
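As a concrete example of the memory pattern the abstract identifies, the following sketch (illustrative, not QUETZAL's actual design) counts k-mers over a DNA read: each k-mer hashes to a data-dependent table slot, so the loop is bound by the long-latency gather/scatter accesses rather than by computation.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// k-mer counting sketch: slide a window of k bases over the read, 2-bit
// encoding each base, and bump a count in a table indexed by the k-mer code.
// The table access is data-dependent, the gather/scatter bottleneck that
// scratchpad-style buffers near the vector datapath aim to hide.
std::vector<uint32_t> count_kmers(const std::string& read, int k,
                                  uint32_t table_size) {
    std::vector<uint32_t> counts(table_size, 0);
    uint64_t code = 0;
    const uint64_t mask = (1ULL << (2 * k)) - 1;  // keep last k bases (2 bits each)
    for (size_t i = 0; i < read.size(); ++i) {
        uint64_t base = (read[i] >> 1) & 3;       // 2-bit encode A/C/G/T
        code = ((code << 2) | base) & mask;
        if (i + 1 >= static_cast<size_t>(k))
            ++counts[code % table_size];          // data-dependent gather+scatter
    }
    return counts;
}
```

Vectorizing this loop processes many windows per instruction, but every lane then issues its own table access, which is exactly where scatter/gather latency dominates on a conventional CPU.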
The Colombian Osteoporosis and Mineral Metabolism Association met in early 2017 to update the Colombian Consensus on Osteoporosis, first issued in 2005, a step considered necessary in view of the underdiagnosis of this disease, the expected impact of population ageing, and the changes in pharmacological treatment since then. A technical team was formed with specialists with long experience across multiple disciplines, who were assigned to four working groups: definitions and epidemiology, diagnosis, pharmacological treatment, and non-pharmacological measures. After a scientific literature review and a series of working meetings, the resulting definitions and recommendations are summarised in this article.