Purpose/significance: Technology competition is a powerful weapon for enterprises seeking to maintain their advantages in the new market environment. The fundamental purpose of this paper is to construct a framework suitable for small and medium-sized enterprises to identify technological opportunities, so as to tap potential technological development opportunities and make full use of limited R&D resources to achieve technological breakthroughs and innovation. Method/process: Benchmarking analysis was used as the main method to select competitor benchmarks for the target enterprise along two dimensions, technical proximity and technical capability. Combining the situation of the benchmark enterprises with the overall technical situation of the industry, potential technology categories were divided, and a three-dimensional patent technology/function matrix was constructed to identify technological opportunities; the vacuum cleaner industry was taken as an example for verification. Result/conclusion
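A minimal sketch of the kind of patent technology/function matrix the abstract describes, using invented toy data (the paper's matrix is three-dimensional; this two-dimensional version only illustrates the counting idea):

```python
# Hypothetical sketch: patents are tagged with a technology category and
# a function, and the matrix counts patents per (technology, function)
# cell. Sparse or empty cells can then be read as candidate gaps, i.e.,
# potential technological opportunities. All records below are invented.
from collections import Counter

patents = [  # (technology category, function)
    ("motor", "suction"), ("motor", "noise reduction"),
    ("filter", "suction"), ("battery", "runtime"),
    ("motor", "suction"),
]
matrix = Counter(patents)
print(matrix[("motor", "suction")])   # 2
print(matrix[("filter", "runtime")])  # 0 -> an empty cell, a candidate gap
```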
Existing formats for Sparse Matrix-Vector Multiplication (SpMV) on the GPU outperform their corresponding implementations on multi-core CPUs. In this paper, we present a new format called Sliced COO (SCOO) and an efficient CUDA implementation that performs SpMV on the GPU using atomic operations. We compare SCOO performance to existing formats of the NVIDIA Cusp library using large sparse matrices. Our results for single-precision floating-point matrices show that SCOO outperforms the COO and CSR formats for all tested matrices, and the HYB format for all tested unstructured matrices, on a single GPU. Furthermore, our dual-GPU implementation achieves an efficiency of 94% on average. Due to the lower performance of existing CUDA-enabled GPUs for atomic operations on double-precision floating-point numbers, the double-precision SCOO implementation does not consistently outperform the other formats for every unstructured matrix. Overall, the average speedup of SCOO on the tested benchmark dataset is 3.33 (1.56) compared to CSR, 5.25 (2.42) compared to COO, and 2.39 (1.37) compared to HYB for single (double) precision on a Tesla C2075. Furthermore, a comparison to a Sandy Bridge CPU shows that SCOO on a Fermi GPU outperforms the multi-threaded CSR implementation of the Intel MKL library on an i7-2700K by a factor between 5.5 (2.3) and 18 (12.7) for single (double) precision.
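To make the COO format concrete, here is a minimal sequential sketch of SpMV over COO triples, with invented toy data; in a parallel GPU kernel many threads would accumulate into the result vector concurrently, which is why the SCOO kernel described above relies on atomic additions:

```python
# Sketch of y = A @ x with A stored in COO format as parallel arrays
# (rows, cols, vals). Sequential pure Python; the in-place accumulation
# into y[r] is the step that becomes an atomic add on a GPU.
def spmv_coo(rows, cols, vals, x, n_rows):
    y = [0.0] * n_rows
    for r, c, v in zip(rows, cols, vals):
        y[r] += v * x[c]  # atomic update in a parallel kernel
    return y

# Toy matrix A = [[4, 0, 1],
#                 [0, 2, 0]]
rows = [0, 0, 1]
cols = [0, 2, 1]
vals = [4.0, 1.0, 2.0]
print(spmv_coo(rows, cols, vals, [1.0, 1.0, 1.0], 2))  # [5.0, 2.0]
```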
This work presents a study of the relative efficiency of selected ports using Data Envelopment Analysis (DEA). The ports chosen for the study are presented, the variables, inputs, and outputs for modeling are defined, and the mathematical model based on linear programming is described. Next, the relative efficiency obtained for each port is presented, and the ports are compared through benchmarking, with changes proposed to optimize port operations.
This book covers recent advances in efficiency evaluation, most notably Data Envelopment Analysis (DEA) and Stochastic Frontier Analysis (SFA) methods. It introduces the underlying theories, shows how to make the relevant calculations, and discusses applications. The aim is to make the reader aware of the pros and cons of the different methods and to show how to use them in both standard and non-standard cases. Several software packages have been developed to solve some of the most common DEA and SFA models. This book relies on R, a free, open-source software environment for statistical computing and graphics. This enables the reader to solve not only standard problems, but also many other problem variants. Using R, one can focus on understanding the context and developing a good model; one is not restricted to predefined model variants and a one-size-fits-all approach. To facilitate the use of R, the authors have developed an R package called Benchmarking, which implements the main methods within both DEA and SFA. The book uses mathematical formulations of models and assumptions, but it de-emphasizes the formal proofs, in part by placing them in appendices or by referring to the original sources. Moreover, the book emphasizes the usage of the theories and the interpretation of the mathematical formulations. It includes a series of small examples, graphical illustrations, simple extensions, and questions to think about. It also combines the formal models with less formal economic and organizational thinking. Last but not least, it discusses some larger applications with significant practical impact, including the design of benchmarking-based regulation of energy companies in different European countries and the development of merger control programs for competition authorities.
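As a minimal illustration of the DEA idea behind these two entries (not the R Benchmarking package itself, and with invented port data): in the one-input, one-output case under constant returns to scale, a unit's CCR efficiency reduces to its output/input ratio divided by the best ratio in the sample, so the best unit scores 1.0.

```python
# Toy DEA-style efficiency scores. With a single input and a single
# output under constant returns to scale, each unit's efficiency is its
# output/input ratio relative to the best observed ratio. Real DEA
# models with multiple inputs/outputs solve one linear program per unit.
def ccr_efficiency(inputs, outputs):
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical ports: input = berth hours, output = containers moved
inputs = [100.0, 80.0, 120.0]
outputs = [500.0, 480.0, 480.0]
print(ccr_efficiency(inputs, outputs))  # port 2 is the efficient peer
```

The efficient unit (score 1.0) serves as the benchmark; the scores of the others quantify how far they are from best practice.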
The aim of this paper is to develop a fully discrete (T, ψ)–ψ_e finite element decoupled scheme to solve time-dependent eddy current problems with multiply connected conductors. By making 'cuts' and setting jumps of ψ_e across the cuts in the nonconductive domain, the uniqueness of ψ_e is guaranteed. Distinguished from the traditional T–ψ method, our decoupled scheme solves the potentials T and ψ − ψ_e separately in two different simple equation systems, which avoids solving a saddle-point equation system and leads to a remarkable reduction in computational effort. The energy-norm error estimate of the fully discrete decoupled scheme is provided. Finally, the scheme is applied to solve two benchmark problems: TEAM Workshop Problem 7 and the IEEJ model. Copyright © 2013 John Wiley & Sons, Ltd.
Structural neuroimaging data have been used to compute an estimate of the biological age of the brain (brain‐age) which has been associated with other biologically and behaviorally meaningful ...measures of brain development and aging. The ongoing research interest in brain‐age has highlighted the need for robust and publicly available brain‐age models pre‐trained on data from large samples of healthy individuals. To address this need we have previously released a developmental brain‐age model. Here we expand this work to develop, empirically validate, and disseminate a pre‐trained brain‐age model to cover most of the human lifespan. To achieve this, we selected the best‐performing model after systematically examining the impact of seven site harmonization strategies, age range, and sample size on brain‐age prediction in a discovery sample of brain morphometric measures from 35,683 healthy individuals (age range: 5–90 years; 53.59% female). The pre‐trained models were tested for cross‐dataset generalizability in an independent sample comprising 2101 healthy individuals (age range: 8–80 years; 55.35% female) and for longitudinal consistency in a further sample comprising 377 healthy individuals (age range: 9–25 years; 49.87% female). This empirical examination yielded the following findings: (1) the accuracy of age prediction from morphometry data was higher when no site harmonization was applied; (2) dividing the discovery sample into two age‐bins (5–40 and 40–90 years) provided a better balance between model accuracy and explained age variance than other alternatives; (3) model accuracy for brain‐age prediction plateaued at a sample size exceeding 1600 participants. These findings have been incorporated into CentileBrain (https://centilebrain.org/#/brainAGE2), an open‐science, web‐based platform for individualized neuroimaging metrics.
In this work, we developed and empirically validated sex‐specific brain‐age models to cover most of the human lifespan (5–90 years). Specifically, we selected the best‐performing model after systematically examining the impact of seven site harmonization strategies, age range, and sample size on brain‐age prediction in a discovery sample of brain morphometric measures from 35,683 healthy individuals. The pre‐trained models were tested for cross‐dataset generalizability in an independent sample comprising 2101 healthy individuals and for longitudinal consistency in a further independent sample comprising 377 healthy individuals.
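The evaluation quantities described above can be sketched in a few lines: given chronological ages and model-predicted brain-ages, the brain-age gap is (predicted − chronological), and prediction accuracy is commonly summarized as the mean absolute error (MAE). The data below are invented, not from the study.

```python
# Hypothetical sketch of brain-age evaluation metrics.
def brain_age_gap(chronological, predicted):
    """Per-subject brain-age gap: predicted minus chronological age."""
    return [p - c for c, p in zip(chronological, predicted)]

def mean_absolute_error(chronological, predicted):
    """MAE of the age predictions, the usual accuracy summary."""
    gaps = brain_age_gap(chronological, predicted)
    return sum(abs(g) for g in gaps) / len(gaps)

ages = [10.0, 35.0, 60.0, 82.0]   # invented chronological ages
preds = [12.5, 33.0, 63.0, 80.0]  # invented model predictions
print(brain_age_gap(ages, preds))        # [2.5, -2.0, 3.0, -2.0]
print(mean_absolute_error(ages, preds))  # 2.375
```

A positive gap is read as an "older-looking" brain relative to chronological age, a negative gap as a "younger-looking" one.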
WHAT IS BENCHMARKING?
CEAUȘESCU IONUT
Analele Universităţii Constantin Brâncuşi din Târgu Jiu : Seria Economie, 04/2022, 2
Journal Article
Peer reviewed
Open access
Benchmarking is an activity that consists of comparing an organization's own practices with those of other organizations. It is the process of seeking out the best methods used in an economic activity, methods that allow the company to improve its performance.