Efforts to advance the use of analog SRAM compute-in-memory (SRAM-CIM) macros for high-precision multiply-and-accumulate (MAC) operations must address energy efficiency, computing latency (<inline-formula> <tex-math notation="LaTeX">T_{\mathrm{ AC}} </tex-math></inline-formula>), and area overhead. This brief presents a novel SRAM-CIM structure that employs (1) a high-input-precision computing cell (HIPCC) to perform 8b MAC operations with high multiplication throughput, and (2) a global bitline-combining (GBL-comb) scheme that improves energy efficiency by reducing the number of analog-to-digital converters (ADCs). A 384-kb SRAM-CIM macro with 20-bit (near-full-precision) output was fabricated in a foundry-provided 28nm logic process for MAC operations with 8b inputs, 8b weights, and 16 accumulations. The resulting macro achieved a <inline-formula> <tex-math notation="LaTeX">T_{\mathrm{ AC}} </tex-math></inline-formula> of 3.6 ns with an energy efficiency of 14.97 TOPS/W for 8-bit MAC operations.
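The "near-full precision" of the 20-bit output follows from simple bit-width arithmetic: an 8b × 8b product needs 16 bits, and summing 16 such products adds at most 4 more. A minimal digital reference sketch (hypothetical; the macro itself performs the accumulation in the analog domain):

```python
def mac(inputs, weights):
    """Digital reference of an 8b-input, 8b-weight, 16-accumulation MAC."""
    assert len(inputs) == len(weights) == 16
    return sum(x * w for x, w in zip(inputs, weights))

# Worst case: every operand at the 8-bit unsigned maximum.
worst = mac([255] * 16, [255] * 16)
assert worst == 16 * 255 * 255   # 1,040,400
assert worst < 2 ** 20           # fits in 20 bits -> near-full output precision
print(worst.bit_length())        # 20
```

Since the worst-case accumulation still fits in 20 bits, no partial sums are truncated at this output width.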
The computing-in-memory (CIM) technique is emerging with the evolution of big data and artificial intelligence (AI) applications. This manuscript presents a systematic review of existing CIM work from a bottom-up view, spanning circuits to applications. Various types of CIM circuits based on different volatile/nonvolatile devices are introduced. Micro CIM architectures that support multibit-precision computation are then illustrated. After that, several types of processor-level CIM chips are analyzed to reveal system-architecture design considerations. The corresponding CIM tool chains and applications beyond AI are also introduced. From the circuit to the application level, this manuscript analyzes the design tradeoffs, remaining challenges, and possible future design trends at the different design hierarchies of CIM processors.
The major challenge faced by modern compute-in-memory (CIM) designs is their heavy reliance on mixed-signal data converters such as digital-to-analog converters (DACs) and analog-to-digital converters (ADCs), which contribute <inline-formula> <tex-math notation="LaTeX">\sim</tex-math> </inline-formula>15% of the area and <inline-formula> <tex-math notation="LaTeX">\sim</tex-math> </inline-formula>50% of the energy of the overall macro and are susceptible to non-linearities, leakage, and process variations, causing deep neural network (DNN) inference/training accuracy loss. As DNN models increase in size, the number of DAC steps required per inference increases exponentially. This work proposes a four-pronged approach to address these challenges: 1) a binary-weighted-bitline-precharge scheme utilizing dedicated reference voltages to perform input bit-serial multiplication in the charge domain, eliminating the need for dedicated DAC circuits; 2) leakage-tolerant, input-dependent bitline-keeper circuits that maintain the local-bitline voltages; 3) hybrid-charge-sharing-based integrating ADCs that leverage the reference voltages to shorten conversion time, improving ADC latency while achieving a compact ADC design; and 4) efficient data movement and utilization of analog-to-digital co-computation. Fabricated in TSMC 65 nm, the compute-in-static-RAM (CISRAM) silicon prototype achieves an average macro energy efficiency of 153-2453.76 TOPS/W, 2.3<inline-formula> <tex-math notation="LaTeX">\times</tex-math> </inline-formula> higher than the latest state-of-the-art designs in the comparison table.
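The input bit-serial multiplication of prong 1) can be understood through a digital shift-add reference: each input bit position is applied in turn, the per-bit partial sums are formed across the array, and the partials are combined with binary weighting. A hypothetical sketch of that arithmetic (the actual scheme forms the partials in the charge domain, not digitally):

```python
def bit_serial_mac(inputs, weights, in_bits=8):
    """Input bit-serial MAC: one input bit position per 'cycle';
    partial sums are combined with binary (shift-add) weighting."""
    acc = 0
    for b in range(in_bits):
        # Partial sum for input bit b across all input/weight pairs.
        partial = sum(((x >> b) & 1) * w for x, w in zip(inputs, weights))
        acc += partial << b  # binary weighting of bit position b
    return acc

xs, ws = [3, 7, 250], [5, 2, 1]
assert bit_serial_mac(xs, ws) == sum(x * w for x, w in zip(xs, ws))  # 279
```

The equivalence holds because shifting each per-bit partial by its bit position exactly reconstructs the full multi-bit products.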
In this work, fluctuation patterns of ReRAM current are classified automatically by the proposed fluctuation pattern classifier (FPC). The FPC is trained with an artificially created dataset to overcome the difficulties of working with measured current signals, including annotation cost and imbalanced data amounts. Using the FPC, fluctuation occurrence under different write conditions is analyzed for both HRS and LRS currents. Based on the measurement and classification results, physical models of the fluctuations are established.
Background and Aim: Different phenotypic methods are available for the identification of Pseudomonas aeruginosa isolates producing carbapenemase enzymes. The carbapenem inactivation method (CIM) is a fast and inexpensive way to detect this enzyme. The purpose of this study was to evaluate the CIM method for accurate identification of carbapenemase-producing Pseudomonas aeruginosa isolates. Materials and Methods: A total of 97 clinical specimens were collected from patients in the hospitals of Hamadan, Iran, from November 2017 to May 2018. Antibiotic susceptibility testing was performed by the disc diffusion method. The minimum inhibitory concentration (MIC) for imipenem was measured by E-test. Then, the CIM test and polymerase chain reaction (PCR) were performed. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the CIM test were calculated for each of the genes. Using SPSS 16 software, the significance of the CIM test was evaluated by the chi-square test (χ2). Results: In this study, the highest and lowest levels of resistance belonged to cefoxitin, 91 (93.8%), and piperacillin/tazobactam, 38 (39.2%). Among the 97 P. aeruginosa clinical isolates, 49 (50.51%) were carbapenemase producers, of which 44 (89.7%) gave positive CIM results; the other 48 (49.48%) isolates gave negative CIM results. Therefore, the sensitivity and specificity of the CIM test were 90% and 100%, respectively. Conclusions: According to the results of this study, the CIM method is an inexpensive test that can be easily performed and has high sensitivity and specificity for the identification of carbapenemase-producing P. aeruginosa isolates.
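The reported sensitivity and specificity can be reproduced from the abstract's own counts (49 producers, 44 of them CIM-positive; 48 non-producers, all CIM-negative) with the standard confusion-matrix definitions:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts from the abstract: 44 true positives, 5 false negatives,
# 48 true negatives, 0 false positives.
sens, spec = sens_spec(tp=44, fn=5, tn=48, fp=0)
print(round(sens * 100), round(spec * 100))  # 90 100
```

44/49 ≈ 89.8%, which rounds to the 90% sensitivity stated in the Results, and 48/48 gives the 100% specificity.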
• Sensor-packed manufacturing systems will become ubiquitous.
• Cybersecurity aspects are gaining importance within the manufacturing domain.
• Manufacturing cyber-physical systems are expected to follow the trend set by other domains that benefited from the Internet of Things, Cloud Computing, and Big Data.
• Outcomes of the implementation of manufacturing cyber-physical systems could be transformative to the extent that predictive manufacturing systems can become a reality.
Recent advances in sensor and communication technologies can provide the foundations for linking the physical world of manufacturing facilities and machines to the cyber world of Internet applications. The coupled manufacturing cyber-physical system is envisioned to handle actual operations in the physical world while simultaneously monitoring them in the cyber world, with the help of advanced data processing and simulation models at both the manufacturing-process and system-operation levels. Moreover, a sensor-packed manufacturing system in which each process or piece of equipment makes its event and status information available, coupled with market research for truly advanced Big Data analytics, seems to provide the right ingredients for event response selection and operation virtualization. As a drawback, the resulting manufacturing cyber-physical system will be vulnerable to the cyber-attacks that are, unfortunately, so common for software and Internet-based systems. This reality makes the penetration of cybersecurity practices into the manufacturing domain a need that goes uncontested across researchers and practitioners. This work reviews the current status of virtualization and cloud-based services for manufacturing systems and of the use of Big Data analytics for planning and control of manufacturing operations. Building on already developed cloud business solutions, cloud manufacturing is expected to offer improved enterprise manufacturing and business decision support. Based on current state-of-the-art cloud manufacturing solutions and Big Data applications, this work also proposes a framework for the development of predictive manufacturing cyber-physical systems that include capabilities for attaching to the Internet of Things, along with capabilities for complex event processing and Big Data algorithmic analytics.
The impact of MDA (Model Driven Architecture) on the software industry has become significant. Many researchers are drawn to this area because it focuses on productivity and reliability and makes the production of software more automatic, faster, and easier to maintain. In this article, we present a case study of an ATM (Automated Teller Machine) project and propose a methodology following the MDA approach, using artificial intelligence and adapting machine learning algorithms to generate software specifications and source code from an interpretation of stakeholder responses to a series of intelligently proposed questions.
The study of precipitation, its distribution, and its temporal evolution is of interest for hydrological design projects. IDF curves are the most widely used methodology for defining the design storm based on the relationship between rainfall intensity, duration, and recurrence. Their construction is often limited by the scarce availability of pluviographic data, both in the spatial coverage of the stations and in the insufficient length of the records. The objective of this work is to update the IDF curves of the Centro de Informaciones Meteorológicas (CIM) "Lic. Enrique B. Rodríguez", Facultad de Ingeniería y Ciencias Hídricas, Universidad Nacional del Litoral (FICH–UNL). The IDF curves were calculated by Sherman's method, and it was verified that rainfall intensity decreases as duration increases and that, for a given duration, intensity increases with the return period. Intensities for durations of 10 minutes or less exceed 100 mm·h-1 for all recurrences. The IDF curves determined for the CIM station show lower rainfall intensities than those defined for the localities of Rafaela and Paraná, with percentage differences that increase with recurrence.
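Sherman's method fits an IDF relation of the general form i = k·T^m / (d + c)^n, where i is intensity, d is duration, and T is the return period. The sketch below uses placeholder coefficients (k, m, c, n are illustrative assumptions, not the values fitted for the CIM station) and checks the two monotonic properties the study verified:

```python
def sherman_intensity(d_min, T_years, k=700.0, m=0.25, c=10.0, n=0.75):
    """Sherman-type IDF relation i = k*T^m / (d + c)^n, in mm/h.
    All coefficients here are illustrative placeholders only."""
    return k * T_years ** m / (d_min + c) ** n

# Property 1: intensity decreases as duration increases (fixed recurrence).
assert sherman_intensity(10, 5) > sherman_intensity(60, 5)
# Property 2: for a given duration, intensity increases with return period.
assert sherman_intensity(10, 50) > sherman_intensity(10, 5)
```

Both properties hold for any positive m and n, which is why fitted Sherman curves always exhibit the behavior reported for the CIM station.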