With the advancement of technology scaling, multi-/many-core platforms are receiving growing attention in embedded systems due to ever-increasing performance requirements and demands for power efficiency. This feature-size scaling, along with architectural innovations, has dramatically exacerbated manufacturing defect and physical fault rates. As a result, in addition to providing high parallelism, such hardware platforms have introduced increasing unreliability into the system. These systems must be carefully designed to ensure long-term and application-specific reliability, especially in mixed-criticality systems, where incorrect execution of applications may cause catastrophic consequences. However, the optimal allocation of applications/tasks on multi-/many-core platforms is an increasingly complex problem. Reliability-aware resource management is therefore crucial for ensuring application-specific Quality-of-Service (QoS) requirements while optimizing other system-level performance goals. This article presents a survey of recent works on reliability-aware resource management in multi-/many-core systems. We first present an overview of reliability in electronic systems, the associated fault models, and the various system models used in related research. We then review recently published articles, focusing primarily on aspects such as application-specific reliability optimization, mixed-criticality awareness, and hardware resource heterogeneity. To underscore the differences among the techniques, we classify them based on their design space exploration. Finally, we briefly discuss upcoming trends and open challenges in reliability-aware resource management for future research.
Contemporary hardware implementations of artificial neural networks face the burden of excess area requirements due to resource-intensive elements such as multipliers and non-linear activation functions. The present work addresses this challenge by proposing a resource-efficient Co-ordinate Rotation Digital Computer (CORDIC)-based neuron architecture (RECON), which can be configured to compute both multiply-accumulate (MAC) and non-linear activation function (AF) operations. The CORDIC-based architecture uses linear and trigonometric relationships to realize the MAC and AF operations, respectively. The proposed design is synthesized and verified at 45nm technology using Cadence Virtuoso for all physical parameters. Implementation of the signed fixed-point 8-bit MAC using our design shows a 60% lower area-latency-power product (ALP), with improvements of 38% in area, 27% in power dissipation, and 15% in latency with respect to the state-of-the-art MAC design. Further, Monte Carlo simulations for process variations and device mismatch are performed for both the proposed model and the state-of-the-art design to evaluate the statistical distribution of dynamic power. The dynamic power variation of our design shows a worst-case mean of <inline-formula> <tex-math notation="LaTeX">189.73\mu W </tex-math></inline-formula>, which is 63% of the state-of-the-art.
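The linear mode that CORDIC uses for the MAC operation can be sketched in a few lines. The snippet below is a generic linear-mode CORDIC rotation, not the RECON datapath itself; the function name, iteration count, and use of floating-point scaling (in hardware, the `2**-i` factors are plain right shifts) are illustrative assumptions.

```python
def cordic_mac(acc, a, b, n_iter=16):
    """Linear-mode CORDIC rotation: returns acc + a*b using only
    shift-weighted additions. Converges for |b| < 2, with error
    bounded by roughly a * 2**-(n_iter - 1)."""
    y, z = float(acc), float(b)
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0   # steer toward z -> 0
        y += d * a * 2.0 ** -i        # accumulate shifted copies of a
        z -= d * 2.0 ** -i            # drive the residual angle to zero
    return y
```

In hardware the same iteration realizes the AF operations by switching the coordinate system to circular (trigonometric) mode, which is what makes a single configurable datapath possible.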
Machine learning helps construct predictive models in clinical data analysis, stock price prediction, image recognition, financial modelling, disease prediction, and diagnostics. This paper proposes machine learning ensemble algorithms to forecast diabetes. The ensembles combine k-NN, Naive Bayes (Gaussian), Random Forest (RF), AdaBoost, and the recently designed Light Gradient Boosting Machine (LightGBM). The proposed ensembles inherit the detection ability of LightGBM to boost accuracy. Under fivefold cross-validation, the proposed ensemble models perform better than other recent models. The k-NN, AdaBoost, and LightGBM ensemble jointly achieves 90.76% detection accuracy. Receiver operating characteristic curve analysis shows that k-NN, RF, and LightGBM successfully handle the class imbalance of the underlying dataset.
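The combination step of such an ensemble can be illustrated with a minimal weighted soft-voting sketch. The probability matrices below are made-up stand-ins for the outputs of k-NN, AdaBoost, and LightGBM on two samples; they are not results from the paper, and the abstract does not specify whether the authors use soft or hard voting.

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Average class-probability matrices (n_samples x n_classes) from
    several classifiers, optionally weighted, and pick the argmax class."""
    probs = np.stack(prob_list)                  # (n_models, n, k)
    if weights is None:
        weights = np.ones(len(prob_list))
    weights = np.asarray(weights, dtype=float)
    avg = np.tensordot(weights / weights.sum(), probs, axes=1)  # (n, k)
    return avg.argmax(axis=1)

# Illustrative per-model probabilities for two samples, two classes
p_knn  = np.array([[0.9, 0.1], [0.4, 0.6]])
p_ada  = np.array([[0.5, 0.5], [0.3, 0.7]])
p_lgbm = np.array([[0.2, 0.8], [0.1, 0.9]])
pred = soft_vote([p_knn, p_ada, p_lgbm])   # -> class 0, then class 1
```

Weighting the vote toward LightGBM would be one way to "inherit" its detection ability, as the abstract describes.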
To systematically catalog protein-altering mutations that may drive the development of prostate cancers and their progression to metastatic disease, we performed whole-exome sequencing of 23 prostate cancers derived from 16 different lethal metastatic tumors and three high-grade primary carcinomas. All tumors were propagated in mice as xenografts, designated the LuCaP series, to model phenotypic variation, such as responses to cancer-directed therapeutics. Although corresponding normal tissue was not available for most tumors, we were able to take advantage of increasingly deep catalogs of human genetic variation to remove most germline variants. On average, each tumor genome contained ∼200 novel nonsynonymous variants, the vast majority of which were specific to individual carcinomas. A subset of genes was recurrently altered across tumors derived from different individuals, including TP53, DLK2, GPC6, and SDF4. Unexpectedly, three prostate cancer genomes exhibited substantially higher mutation frequencies, with 2,000–4,000 novel coding variants per exome. A comparison of castration-resistant and castration-sensitive pairs of tumor lines derived from the same prostate cancer highlights mutations in the Wnt pathway as potentially contributing to the development of castration resistance. Collectively, our results indicate that point mutations arising in coding regions of advanced prostate cancers are common but, with notable exceptions, very few genes are mutated in a substantial fraction of tumors. We also report a previously undescribed subtype of prostate cancer exhibiting "hypermutated" genomes, with potential implications for resistance to cancer therapeutics. Our results also suggest that increasingly deep catalogs of human germline variation may challenge the necessity of sequencing matched tumor-normal pairs.
Biomedical engineers prefer decision forests over traditional decision trees to design state-of-the-art Parkinson's Detection Systems (PDS) on massive acoustic signal data. However, a key challenge researchers face with decision forests is identifying the minimum number of decision trees required to achieve maximum detection accuracy with the lowest error rate. This article examines two recent decision forest algorithms, Systematically Developed Forest (SysFor) and Decision Forest by Penalizing Attributes (ForestPA), along with the popular Random Forest, to design three distinct Parkinson's detection schemes with an optimum number of decision trees. The proposed approach uses the minimum number of decision trees needed to achieve maximum detection accuracy. The training and testing samples and the density of trees in the forest are kept dynamic and incremental to obtain decision forests with maximum capability for detecting Parkinson's Disease (PD). This incremental tree density with dynamic training and testing of decision forests proved to be a better approach for detecting PD. The proposed approaches are compared with other state-of-the-art classifiers, including modern deep learning techniques, to assess detection capability. The article also provides a guideline for generating an ideal training and testing split of two modern acoustic datasets of Parkinson's and control subjects donated by the Department of Neurology in Cerrahpaşa, Istanbul, and the Departamento de Matemáticas, Universidad de Extremadura, Cáceres, Spain. Among the three proposed detection schemes, ForestPA proved to be a promising Parkinson's disease detector, requiring only a small number of decision trees in the forest to score the highest detection accuracy of 94.12% to 95.00%.
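The incremental tree-density search described above can be sketched with scikit-learn's `warm_start` mechanism, which adds trees to an existing forest instead of refitting from scratch. This is a generic sketch on synthetic data standing in for the acoustic feature matrix; the tolerance of 0.5% and the cap of 50 trees are assumptions, not values from the article.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical synthetic stand-in for the acoustic feature matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Grow the forest one tree at a time and record held-out accuracy
clf = RandomForestClassifier(n_estimators=1, warm_start=True, random_state=0)
scores = {}
for n in range(1, 51):
    clf.n_estimators = n      # warm_start: only the new tree is fitted
    clf.fit(X_tr, y_tr)
    scores[n] = clf.score(X_te, y_te)

# Smallest forest whose accuracy is within 0.5% of the best observed
best_acc = max(scores.values())
best_n = min(n for n, s in scores.items() if s >= best_acc - 0.005)
```

The same loop applies to SysFor or ForestPA given an implementation with an equivalent incremental-growth interface.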
Conventional transmission line protection algorithms may experience delay or even mal-operation in the presence of shunt compensation due to various non-fault transient disturbances. This paper presents a time-domain differential protection technique based on Kullback–Leibler divergence with thresholding logic for a transmission system compensated by a mid-point static synchronous compensator. A detection index is computed for each phase current in order to discriminate between fault and non-fault scenarios. Different power system structures modelled in EMTDC/PSCAD are simulated to generate numerous test cases for evaluating the applicability of the proposed differential protection logic. The proposed method also reliably distinguishes faults from various non-fault events such as capacitor switching, sudden load change, power swing, and current transformer saturation, including external fault cases. Results and a comparative assessment against recently proposed time-frequency techniques and commercial current differential relaying reports demonstrate the efficacy and robustness of the proposed technique. The results confirm the accurate operation of the proposed protection scheme.
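A KL-divergence-based detection index of this kind can be sketched as follows. This is an illustrative histogram-based index comparing the sample distributions of the two terminal currents of one phase, not the authors' exact formulation; the threshold value and the simple sinusoidal test signals are assumptions.

```python
import numpy as np

def kl_index(i_local, i_remote, bins=32, eps=1e-9):
    """Illustrative per-phase detection index: KL divergence between the
    sample distributions of the local and remote terminal currents.
    Through-current conditions give similar distributions (small index);
    an internal fault makes them diverge (large index)."""
    lo = min(i_local.min(), i_remote.min())
    hi = max(i_local.max(), i_remote.max())
    p, _ = np.histogram(i_local, bins=bins, range=(lo, hi))
    q, _ = np.histogram(i_remote, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps    # eps avoids log(0) in empty bins
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))

# Toy 50 Hz current windows: identical under through-current, a large
# phase-shifted fault current at one end under an internal fault
t = np.linspace(0, 0.1, 1000)
healthy = np.sin(2 * np.pi * 50 * t)
faulty = 4.0 * np.sin(2 * np.pi * 50 * t + 0.4)
THRESHOLD = 0.5              # hypothetical setting, tuned from simulations
trip = kl_index(healthy, faulty) > THRESHOLD
```

In practice the index would be computed over a sliding window per phase and combined with the thresholding logic before issuing a trip decision.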
The gut microbiota is critical for maintaining human health and the immune system. Several neuroscientific studies have shown the significance of the microbiota in the development of brain systems. The gut microbiota and the brain are interconnected in a bidirectional relationship, as research on the microbiome-gut-brain axis shows. Significant evidence links anxiety and depression disorders to the community of microbes that live in the gastrointestinal system. Modified diet, fish and omega-3 fatty acid intake, macro- and micro-nutrient intake, prebiotics, probiotics, synbiotics, postbiotics, fecal microbiota transplantation, and 5-HTP regulation may all be utilized to alter the gut microbiota as a treatment approach. To date, few preclinical and clinical studies have examined the effectiveness and reliability of these therapeutic approaches for depression and anxiety. This article highlights relevant research on the association of the gut microbiota with depression and anxiety and the different therapeutic possibilities of gut microbiota modification.
In response to the escalating demand for hardware-efficient Deep Neural Network (DNN) architectures, we present a novel quantize-enabled multiply-accumulate (MAC) unit. Our methodology employs a right shift-and-add computation for the MAC operation, enabling runtime truncation without additional hardware. This architecture optimally utilizes hardware resources, enhancing throughput while reducing computational complexity through bit-truncation techniques. Our key contribution is a hardware-efficient MAC computation algorithm that supports both iterative and pipelined implementations, catering to accelerators that prioritize either hardware efficiency or enhanced throughput. Additionally, we introduce a processing element (PE) with a bias pre-loading scheme, saving one clock cycle and eliminating the extra resources required in conventional PE implementations. The PE facilitates quantization-based MAC calculations through an efficient bit-truncation method, removing the need for extra hardware logic. This versatile PE accommodates variable bit precision with a dynamic fraction part within the sfxpt<N,f> representation, meeting specific model or layer demands. In software emulation, our proposed approach demonstrates minimal accuracy loss: under 1.6% for LeNet-5 on MNIST and around 4% for ResNet-18 and VGG-16 on CIFAR-10 in the sfxpt<8,5> format, compared to conventional float32-based implementations. Hardware performance results on the Xilinx Virtex-7 board show a 37% reduction in area utilization and a 45% reduction in power consumption compared to the best state-of-the-art MAC architecture. Extending the proposed MAC to a LeNet DNN model results in a 42% reduction in resource requirements and a significant 27% reduction in delay. This architecture provides notable advantages for resource-efficient, high-throughput edge-AI applications.
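The shift-and-add MAC with bit truncation can be modelled bit-accurately in software. The sketch below quantizes to the sfxpt<8,5> format and multiplies by summing shifted partial products; it uses a left-shift formulation and a `keep_bits` parameter to stand in for the paper's runtime truncation, so it illustrates the principle rather than reproducing the exact right-shift RTL datapath.

```python
def sfxpt(x, n=8, f=5):
    """Quantize x to a signed fixed-point sfxpt<n, f> integer
    (n total bits, f fraction bits), saturating at the range limits."""
    lo, hi = -(1 << (n - 1)), (1 << (n - 1)) - 1
    return max(lo, min(hi, round(x * (1 << f))))

def shift_add_mac(acc, a_q, b_q, f=5, keep_bits=8):
    """Shift-and-add multiply of two fixed-point integers, accumulated
    into acc. Scanning only keep_bits multiplier bits models runtime
    bit-truncation without extra hardware (an illustrative sketch)."""
    sign = 1
    if b_q < 0:
        b_q, sign = -b_q, -1
    prod = 0
    for i in range(keep_bits):       # one add per set multiplier bit
        if (b_q >> i) & 1:
            prod += a_q << i         # shifted partial product
    return acc + sign * (prod >> f)  # rescale back to f fraction bits
```

For example, `shift_add_mac(0, sfxpt(0.75), sfxpt(0.5))` yields `sfxpt(0.375)`, i.e. the product 0.75 × 0.5 in sfxpt<8,5>. Lowering `keep_bits` trades accuracy for fewer add cycles, which is the knob a quantize-enabled accelerator exposes per layer.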
Naïvely simulated additive white Gaussian noise (AWGN) may not fully characterize the complexity of real-world noisy images. Owing to the optimal sparsity of the curvelet image representation, we propose a curvelet-based model for denoising real-world RGB images. Initially, the image is decomposed into three curvelet scales: the approximation scale (which retains low-frequency information), the coarser scale, and the finest scale (which preserves high-frequency components). Coefficients in the approximation and finest scales are estimated using a non-local means (NLM) filter, while a scale-dependent threshold is adopted for signal estimation in the coarser scale. The reconstructed image in the spatial domain is further processed using a Guided Image Filter (GIF) to suppress the ringing artifacts caused by curvelet thresholding. The proposed approach, termed the CTuNLM method, is extended to color image denoising using the uncorrelated YUV color space. Extensive experiments on multi-channel real noisy images are conducted in comparison with eight state-of-the-art methods. With four encouraging qualitative and quantitative measures, including PSNR and SSIM, we found that the CTuNLM method achieves better denoising performance in terms of noise reduction and detail preservation. We further examined the potential of the proposed approach by focusing only on the Finest scale curvelet Coefficients (FC). Features like small details, edges, and textures consistently improve overall denoising performance while minimizing spurious details. We studied "The Curious Case of the Finest Scale" and constructed "Deep Curvelet-Net", an encoder-decoder-based CNN architecture, as a pilot work. The encoder extracts multiscale spatial characteristics from the noisy FC, while the decoder processes the denoised FC under the supervision of the encoder's multiscale spatial attention map. "Deep Curvelet-Net" links encoder multiscale feature modeling with decoder spatial attention supervision to learn the features most essential for denoising.
The CNN-based architecture only estimates the FC, while all other CTuNLM stages are left unchanged to produce the denoised output. The results presented in this article validate the design of the proposed CNN architecture in the curvelet domain and motivate a search beyond classical thresholding and/or filtering approaches.
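The scale-dependent thresholding applied to the coarser scale can be sketched generically. The snippet below is a standard hard-threshold rule on an array of transform coefficients (here plain numbers standing in for curvelet coefficients); the choice of `k = 3` and the noise estimate `sigma` are illustrative, not the paper's calibrated per-scale settings.

```python
import numpy as np

def threshold_scale(coeffs, sigma, k=3.0):
    """Scale-dependent hard threshold: zero every coefficient whose
    magnitude falls below k * sigma, keep the rest unchanged. Applied
    per scale, with sigma estimated from that scale's noise level."""
    return np.where(np.abs(coeffs) >= k * sigma, coeffs, 0.0)

# Toy coefficients: two small (noise-like) and two large (signal-like)
out = threshold_scale(np.array([0.1, -2.5, 0.4, 3.1]), sigma=0.3)
# -> the 0.1 and 0.4 entries are zeroed; -2.5 and 3.1 survive
```

Because hard thresholding introduces ringing in the reconstruction, the pipeline follows it with Guided Image Filtering in the spatial domain, as described above.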
Recently, advancements in technology have promoted the classification of brain tumors at early stages to reduce mortality and disease severity. Hence, there is a need for a model that automatically segments and classifies tumor regions, supporting researchers and medical practitioners without requiring expert knowledge. This research therefore proposes a novel framework called the scatter sharp optimization-based correlation-driven deep CNN model (SSO-CCNN) for classifying brain tumors. The core of this research is the development of an optimized correlation-enabled deep model that classifies tumors using optimized segments acquired through the developed sampled progressively growing generative adversarial networks (sampled PGGANs). Hyperparameter training is driven by the designed SSO optimizer, which combines the global and local search phases of flower pollination optimization with the adaptive automatic solution convergence of sunflower optimization for precise results. The recorded accuracy, sensitivity, and specificity of the SSO-CCNN classification scheme are 97.41%, 97.89%, and 96.93%, respectively, on the brain tumor dataset, and the execution latency was found to be 1.6 s. Thus, the proposed framework can help medical experts track and assess brain tumor symptoms reliably.