Purpose/significance: Technology competition is a powerful weapon for enterprises to maintain their advantages in the new market environment. The fundamental purpose of this paper is to construct a framework suitable for small and medium-sized enterprises to identify technological opportunities, so as to tap potential technological development opportunities and make full use of limited R&D resources to achieve technological breakthroughs and innovation. Method/process: Benchmarking analysis was used as the main method to select competitor benchmarks for the target enterprise along the two dimensions of technical proximity and technical capability. Combining the situation of the benchmark enterprises with the overall technical situation of the industry, potential technology categories were delineated, and a three-dimensional patent technology/function matrix was constructed to identify technological opportunities; the vacuum cleaner industry was taken as an example for verification. Result/conclusion
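The matrix step described above can be illustrated with a toy sketch: given patent counts cross-tabulated by technology and function, empty cells flag candidate technological opportunities. All technology and function names and all counts below are hypothetical, invented for illustration; the paper's actual matrix is three-dimensional and built from real patent data of the benchmark enterprises.

```python
import numpy as np

# Hypothetical patent counts: rows are technologies, columns are functions.
# A zero cell (no patents filed by the target firm or its benchmarks)
# flags a potential technology opportunity worth investigating.
patents = np.array([[12, 0, 3],
                    [0, 7, 0],
                    [5, 2, 1]])
techs = ["motor", "filtration", "navigation"]
funcs = ["suction", "noise reduction", "dust separation"]

# Collect every (technology, function) pair with no patent activity.
opportunities = [(techs[i], funcs[j])
                 for i, j in zip(*np.where(patents == 0))]
print(opportunities)
```

A real study would weight these gaps by the benchmark firms' technical capability before treating any empty cell as an opportunity.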
Existing formats for Sparse Matrix-Vector Multiplication (SpMV) on the GPU outperform their corresponding implementations on multi-core CPUs. In this paper, we present a new format called Sliced COO (SCOO) and an efficient CUDA implementation to perform SpMV on the GPU using atomic operations. We compare SCOO performance to existing formats of the NVIDIA Cusp library using large sparse matrices. Our results for single-precision floating-point matrices show that SCOO outperforms the COO and CSR formats for all tested matrices and the HYB format for all tested unstructured matrices on a single GPU. Furthermore, our dual-GPU implementation achieves an efficiency of 94% on average. Due to the lower performance of existing CUDA-enabled GPUs for atomic operations on double-precision floating-point numbers, the double-precision SCOO implementation does not consistently outperform the other formats for every unstructured matrix. Overall, the average speedup of SCOO for the tested benchmark dataset is 3.33 (1.56) compared to CSR, 5.25 (2.42) compared to COO, and 2.39 (1.37) compared to HYB for single (double) precision on a Tesla C2075. Furthermore, comparison to a Sandy Bridge CPU shows that SCOO on a Fermi GPU outperforms the multi-threaded CSR implementation of the Intel MKL library on an i7-2700K by a factor between 5.5 (2.3) and 18 (12.7) for single (double) precision.
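The COO kernel that SCOO builds on can be sketched on the CPU: each stored nonzero contributes val * x[col] to y[row], and when many GPU threads process nonzeros from the same row concurrently, the accumulation into y[row] must be atomic. The NumPy scatter-add below stands in for those atomic additions; this is an illustrative sketch of plain COO SpMV, not the sliced layout or the CUDA code from the paper.

```python
import numpy as np

# COO representation of a small sparse matrix (illustrative; SCOO
# additionally slices rows so partial sums fit in fast shared memory).
rows = np.array([0, 0, 1, 2, 2, 2])
cols = np.array([0, 2, 1, 0, 1, 2])
vals = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])

def spmv_coo(rows, cols, vals, x, n_rows):
    """y = A @ x where A is given in COO form.

    np.add.at performs an unbuffered scatter-add, so repeated row
    indices accumulate correctly -- the same role atomic additions
    play when many CUDA threads write into one output row."""
    y = np.zeros(n_rows)
    np.add.at(y, rows, vals * x[cols])
    return y

x = np.array([1.0, 1.0, 1.0])
print(spmv_coo(rows, cols, vals, x, 3))  # [ 3.  3. 15.]
```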
This work presents a study of the relative efficiency of selected ports using Data Envelopment Analysis (DEA). The ports chosen for the study are presented, the variables, inputs, and outputs for modeling are defined, and the mathematical model, based on linear programming, is formulated. Next, the relative efficiency obtained for each port is presented, and a benchmarking comparison among the ports is carried out, proposing changes to optimize port operations.
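A minimal sketch of the kind of linear program such a DEA study solves, here the input-oriented CCR envelopment model: for each decision-making unit (port) o, minimize θ subject to some convex combination of the units producing at least o's outputs while consuming at most θ times o's inputs. The single-input, single-output port data below are made up for illustration and do not come from the study.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.

    X: (m, n) input matrix, Y: (s, n) output matrix, n DMUs (ports).
    Decision variables z = [theta, lambda_1, ..., lambda_n]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                      # minimize theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, o]          # X @ lam - theta * x_o <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y               # Y @ lam >= y_o
    b_ub[m:] = -Y[:, o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1))
    return res.fun                  # optimal theta in (0, 1]

X = np.array([[2.0, 4.0, 8.0]])     # hypothetical inputs (e.g. berths)
Y = np.array([[2.0, 3.0, 4.0]])     # hypothetical outputs (e.g. TEU moved)
effs = [round(dea_ccr_efficiency(X, Y, o), 3) for o in range(3)]
print(effs)  # [1.0, 0.75, 0.5]
```

Units with θ < 1 are inefficient relative to the frontier, and the optimal λ weights identify their benchmark peers, which is exactly how such a study proposes operational changes.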
The aim of this paper is to develop a fully discrete (T, ψ)-ψ_e finite element decoupled scheme to solve time-dependent eddy current problems with multiply connected conductors. By making 'cuts' and prescribing jumps of ψ_e across the cuts in the nonconductive domain, the uniqueness of ψ_e is guaranteed. Distinguished from the traditional T-ψ method, our decoupled scheme solves the potentials T and ψ-ψ_e separately in two different simple equation systems, which avoids solving a saddle-point equation system and leads to a remarkable reduction in computational effort. The energy-norm error estimate of the fully discrete decoupled scheme is provided. Finally, the scheme is applied to solve two benchmark problems: TEAM Workshop Problem 7 and the IEEJ model. © 2013 John Wiley & Sons, Ltd.
Structural neuroimaging data have been used to compute an estimate of the biological age of the brain (brain‐age) which has been associated with other biologically and behaviorally meaningful ...measures of brain development and aging. The ongoing research interest in brain‐age has highlighted the need for robust and publicly available brain‐age models pre‐trained on data from large samples of healthy individuals. To address this need we have previously released a developmental brain‐age model. Here we expand this work to develop, empirically validate, and disseminate a pre‐trained brain‐age model to cover most of the human lifespan. To achieve this, we selected the best‐performing model after systematically examining the impact of seven site harmonization strategies, age range, and sample size on brain‐age prediction in a discovery sample of brain morphometric measures from 35,683 healthy individuals (age range: 5–90 years; 53.59% female). The pre‐trained models were tested for cross‐dataset generalizability in an independent sample comprising 2101 healthy individuals (age range: 8–80 years; 55.35% female) and for longitudinal consistency in a further sample comprising 377 healthy individuals (age range: 9–25 years; 49.87% female). This empirical examination yielded the following findings: (1) the accuracy of age prediction from morphometry data was higher when no site harmonization was applied; (2) dividing the discovery sample into two age‐bins (5–40 and 40–90 years) provided a better balance between model accuracy and explained age variance than other alternatives; (3) model accuracy for brain‐age prediction plateaued at a sample size exceeding 1600 participants. These findings have been incorporated into CentileBrain (https://centilebrain.org/#/brainAGE2), an open‐science, web‐based platform for individualized neuroimaging metrics.
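The core computation the abstract refers to, regressing chronological age on morphometric features and taking predicted minus chronological age as the brain-age gap with mean absolute error (MAE) as the accuracy metric, can be sketched with ordinary least squares on synthetic data. The feature model below is invented for illustration; it does not reproduce CentileBrain's actual models or harmonization pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for morphometric features: each column drifts
# linearly with age plus noise (real models use hundreds to thousands
# of FreeSurfer-style regional measures).
n, p = 500, 10
age = rng.uniform(5, 90, n)
W = rng.normal(size=p)
features = np.outer(age, W) + rng.normal(scale=5.0, size=(n, p))

# Ordinary least squares: predict age from features (half train, half test).
X = np.column_stack([np.ones(n), features])
train, test = slice(0, 250), slice(250, 500)
beta, *_ = np.linalg.lstsq(X[train], age[train], rcond=None)
pred = X[test] @ beta

brain_age_gap = pred - age[test]    # positive gap = "older-looking" brain
mae = np.abs(brain_age_gap).mean()  # the accuracy metric in the abstract
print(round(mae, 2))
```

In practice a bias correction is often applied, since regression toward the mean makes raw brain-age gaps age-dependent; the abstract's findings on harmonization, age bins, and sample size all concern tuning this basic pipeline.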
In this work, we developed and empirically validated sex‐specific brain‐age models to cover most of the human lifespan (5–90 years). Specifically, we selected the best‐performing model after systematically examining the impact of seven site harmonization strategies, age range, and sample size on brain‐age prediction in a discovery sample of brain morphometric measures from 35,683 healthy individuals. The pre‐trained models were tested for cross‐dataset generalizability in an independent sample comprising 2101 healthy individuals and for longitudinal consistency in a further independent sample comprising 377 healthy individuals.
WHAT IS BENCHMARKING? CEAUȘESCU IONUT.
Analele Universităţii Constantin Brâncuşi din Târgu Jiu: Seria Economie, 04/2022.
Benchmarking is an activity that consists of comparing an organization's own practices with those of other organizations. Benchmarking is the process of seeking out the best methods used in an economic activity, methods that allow the company to improve its performance.
Six years and more than seventy publications later, this paper looks back and analyzes the development of prognostic algorithms using the C-MAPSS datasets generated and disseminated by the Prognostics Center of Excellence at NASA Ames Research Center. Among those datasets are five run-to-failure C-MAPSS datasets that have been popular due to various characteristics applicable to prognostics. The C-MAPSS datasets pose several challenges that are inherent to general prognostics applications. In particular, management of high variability due to sensor noise, effects of operating conditions, and the presence of multiple simultaneous fault modes are some factors that have a great impact on the generalization capabilities of prognostic algorithms. More than seventy publications have used the C-MAPSS datasets for developing data-driven prognostic algorithms. However, in the absence of performance benchmarking results, and due to common misunderstandings in interpreting the relationships between these datasets, it has been difficult for users to suitably compare their results. In addition to identifying differentiating characteristics in these datasets, this paper also provides performance results for the PHM'08 data challenge winning entries to serve as a performance baseline. This paper summarizes various prognostic modeling efforts that used C-MAPSS datasets and provides guidelines and references for further usage of these datasets in a manner that allows clear and consistent comparison between different approaches.
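For context, the asymmetric scoring function associated with the PHM'08 data challenge penalizes late remaining-useful-life (RUL) predictions, which overestimate remaining life, more heavily than early ones. The sketch below uses the widely cited time constants a1 = 13 (early) and a2 = 10 (late); treat these constants as an assumption of this sketch rather than a quotation of the challenge rules.

```python
import math

def phm08_score(predicted, actual, a_early=13.0, a_late=10.0):
    """Sum the per-unit penalties for a set of RUL predictions.

    d = predicted - actual: d < 0 is an early (conservative) prediction,
    d >= 0 a late one, penalized with the smaller time constant so its
    exponential penalty grows faster."""
    total = 0.0
    for p, a in zip(predicted, actual):
        d = p - a
        if d < 0:
            total += math.exp(-d / a_early) - 1.0
        else:
            total += math.exp(d / a_late) - 1.0
    return total

# For the same absolute error, predicting late costs more than early:
early = phm08_score([95], [100])   # 5 cycles early
late = phm08_score([105], [100])   # 5 cycles late
print(early < late)  # True
```

Because the score sums exponentials, a single badly late prediction can dominate an otherwise accurate model, which is one reason the paper argues for reporting complementary metrics when benchmarking on these datasets.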