Complexity management is an increasing challenge for industrial companies. To address this issue, this paper develops a procedure to reduce the complexity of products and processes. This procedure includes five steps: (1) definition of the scope of the products and processes to be included in the analysis, (2) grouping of products into A, B, and C categories, (3) identification and quantification of the most important complexity cost factors, (4) identification of initiatives for the possible reduction of complexity costs and quantification of the possible cost savings, and (5) evaluation and prioritisation of initiatives. To test its usefulness, the suggested procedure was applied at a globally leading manufacturer of mechanical consumer products. The case study demonstrated the usefulness of the proposed procedure in (1) supporting the allocation of complexity costs to individual product variants, (2) achieving a better understanding of the cost structure of the product assortment and business processes, and (3) providing a basis for generating and evaluating initiatives aimed at reducing the complexity of products and processes. The case study also showed that use of the procedure can produce considerable financial benefits.
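Step (2), the ABC grouping, is typically a Pareto-style classification by cumulative revenue share. A minimal sketch of that step (the 80%/95% cut-offs and the function name are illustrative assumptions, not values taken from the paper):

```python
def abc_classify(revenue_by_product, a_cut=0.80, b_cut=0.95):
    """Classify products into A/B/C by cumulative revenue share.
    Products covering the first `a_cut` of total revenue are 'A',
    those up to `b_cut` are 'B', and the long tail is 'C'
    (illustrative cut-offs, not the paper's)."""
    total = sum(revenue_by_product.values())
    ranked = sorted(revenue_by_product.items(),
                    key=lambda kv: kv[1], reverse=True)
    classes, cumulative = {}, 0.0
    for product, revenue in ranked:
        cumulative += revenue / total
        if cumulative <= a_cut:
            classes[product] = "A"
        elif cumulative <= b_cut:
            classes[product] = "B"
        else:
            classes[product] = "C"
    return classes
```

In such procedures the A group usually receives the most detailed complexity-cost analysis, while C items become candidates for assortment pruning.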
Assembly complexity assessment is a widely addressed topic in manufacturing. Several studies have demonstrated the correlation between assembly complexity and the occurrence of defects, justifying this increasing attention. A measure of complexity provides control over quality costs and performance. Over the years, many methods have been proposed to provide an objective measure of complexity. One of the most widely used is the so-called Manufacturing Complexity Assessment Tool (MCAT), as modified by Samy and ElMaraghy for assessing product assembly complexity. Although this method highlights some interesting aspects, it presents some critical issues. This work aims to thoroughly analyse the method, focusing on its strengths and limitations.
High Efficiency Video Coding (HEVC) significantly reduces bit rates over the preceding H.264 standard, but at the expense of extremely high encoding complexity. In HEVC, the quad-tree partition of the coding unit (CU) consumes a large proportion of the encoding complexity, due to the brute-force search for rate-distortion optimization (RDO). Therefore, this paper proposes a deep learning approach to predict the CU partition for reducing HEVC complexity at both intra- and inter-modes, based on a convolutional neural network (CNN) and a long short-term memory (LSTM) network. First, we establish a large-scale database including substantial CU partition data for the HEVC intra- and inter-modes. This enables deep learning on the CU partition. Second, we represent the CU partition of an entire coding tree unit in the form of a hierarchical CU partition map (HCPM). Then, we propose an early-terminated hierarchical CNN (ETH-CNN) for learning to predict the HCPM. Consequently, the encoding complexity of intra-mode HEVC can be drastically reduced by replacing the brute-force search with ETH-CNN to decide the CU partition. Third, an ETH-LSTM is proposed to learn the temporal correlation of the CU partition. Then, we combine the ETH-LSTM and ETH-CNN to predict the CU partition for reducing HEVC complexity at inter-mode. Finally, experimental results show that our approach outperforms other state-of-the-art approaches in reducing HEVC complexity at both intra- and inter-modes.
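The hierarchical CU partition map can be pictured as a quad-tree of split decisions over a 64×64 coding tree unit. A toy sketch of that representation (the dict layout is an illustrative assumption, and `split_fn` stands in for the paper's ETH-CNN predictor):

```python
def hcpm(split_fn, x=0, y=0, size=64, min_size=8):
    """Recursively build a hierarchical CU partition map as a nested dict.
    split_fn(x, y, size) -> bool decides whether a CU splits into four
    quadrants; in the paper this decision comes from the ETH-CNN, here it
    is a plain callable for illustration."""
    node = {"x": x, "y": y, "size": size, "split": False, "children": []}
    if size > min_size and split_fn(x, y, size):
        node["split"] = True
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                node["children"].append(
                    hcpm(split_fn, x + dx, y + dy, half, min_size))
    return node

def leaf_cus(node):
    # The leaves of the quad-tree are the final coding units.
    if not node["split"]:
        return [node]
    return [cu for child in node["children"] for cu in leaf_cus(child)]
```

Replacing the recursive RDO search with a single predicted map of this shape is what removes the brute-force cost.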
DeepVCA: Deep Video Complexity Analyzer. Amirpour, Hadi; Schoeffmann, Klaus; Ghanbari, Mohammad. IEEE Transactions on Circuits and Systems for Video Technology, 2024. Journal article; peer reviewed; open access.
Video streaming and its applications are growing rapidly, making video optimization a primary target for content providers looking to enhance their services. Enhancing the quality of videos requires the adjustment of different encoding parameters such as bitrate, resolution, and frame rate. To avoid brute-force approaches to predicting optimal encoding parameters, video complexity features are typically extracted and utilized. To predict optimal encoding parameters effectively, content providers traditionally use unsupervised feature extraction methods, such as ITU-T's Spatial Information (SI) and Temporal Information (TI), to represent the spatial and temporal complexity of video sequences. Recently, the Video Complexity Analyzer (VCA) was introduced to extract DCT-based features representing the complexity of a video sequence (or parts thereof). These unsupervised features, however, cannot accurately predict video encoding parameters. To address this issue, this paper introduces a novel supervised feature extraction method named DeepVCA, which extracts the spatial and temporal complexity of video sequences using deep neural networks. In this approach, the encoding bits required to encode each frame in intra-mode and inter-mode are used as labels for spatial and temporal complexity, respectively. Initially, we benchmark various deep neural network structures to predict spatial complexity. We then leverage the similarity of the features used to predict the spatial complexity of the current frame and its previous frame to rapidly predict temporal complexity. This approach is particularly useful because temporal complexity may depend not only on the differences between two consecutive frames but also on their spatial complexity. Our proposed approach demonstrates significant improvement over unsupervised methods, especially for temporal complexity.
As an example application, we verify the effectiveness of these features in predicting the encoding bitrate and encoding time of video sequences, which are crucial tasks in video streaming. The source code and dataset are available at https://github.com/cd-athena/DeepVCA.
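The unsupervised SI/TI baseline mentioned above is straightforward to compute: per ITU-T P.910, SI is the maximum over frames of the standard deviation of the Sobel-filtered luma, and TI the maximum standard deviation of consecutive frame differences. A minimal NumPy sketch (the loop-based convolution keeps it self-contained; function names are my own):

```python
import numpy as np

def sobel_magnitude(frame):
    """Sobel gradient magnitude of a 2D luma array (interior pixels only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = frame.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):           # explicit 3x3 sliding-window accumulation
        for j in range(3):
            patch = frame[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.sqrt(gx ** 2 + gy ** 2)

def si_ti(frames):
    """SI/TI in the spirit of ITU-T P.910: SI = max std of Sobel-filtered
    luma over frames; TI = max std of consecutive frame differences."""
    si = max(float(sobel_magnitude(f.astype(float)).std()) for f in frames)
    ti = max(float((b.astype(float) - a.astype(float)).std())
             for a, b in zip(frames, frames[1:]))
    return si, ti
```

DeepVCA's point is precisely that these hand-crafted statistics correlate only loosely with encoding bits, which the supervised features predict directly.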
Versatile Video Coding (VVC), the latest standard, significantly improves coding efficiency over its predecessor, High Efficiency Video Coding (HEVC), but at the expense of sharply increased complexity. In VVC, the quad-tree plus multi-type tree (QTMT) structure of the coding unit (CU) partition accounts for over 97% of the encoding time, due to the brute-force search for recursive rate-distortion (RD) optimization. Instead of the brute-force QTMT search, this paper proposes a deep learning approach to predict the QTMT-based CU partition, drastically accelerating the encoding process of intra-mode VVC. First, we establish a large-scale database containing sufficient CU partition patterns with diverse video content, which can facilitate data-driven VVC complexity reduction. Next, we propose a multi-stage exit CNN (MSE-CNN) model with an early-exit mechanism to determine the CU partition, in accord with the flexible QTMT structure at multiple stages. Then, we design an adaptive loss function for training the MSE-CNN model, synthesizing both the uncertain number of split modes and the target of minimized RD cost. Finally, a multi-threshold decision scheme is developed, achieving a desirable trade-off between complexity and RD performance. The experimental results demonstrate that our approach can reduce the encoding time of VVC by 44.65%–66.88% with a negligible Bjøntegaard delta bit-rate (BD-BR) increase of 1.322%–3.188%, significantly outperforming other state-of-the-art approaches.
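The multi-threshold decision at the end can be sketched as: keep only the split modes whose predicted probability clears a per-mode threshold, and fall back to full RD checking over all modes when none does. The mode names and threshold values below are illustrative assumptions, not the paper's settings:

```python
def select_candidate_modes(probs, thresholds, default=0.5):
    """Keep QTMT split modes whose CNN-predicted probability clears its
    per-mode threshold; if none does, return all modes so full RDO can
    decide. Lowering thresholds keeps more candidates, trading speed-up
    for RD performance (illustrative sketch, not the paper's scheme)."""
    kept = [m for m, p in probs.items() if p >= thresholds.get(m, default)]
    return kept if kept else list(probs)
```

Tuning the thresholds is what yields the reported range of operating points between maximum speed-up and minimal BD-BR loss.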
The research explores the historical development of project complexity. Projects are becoming more complex due to unexpected emergent behaviour and characteristics. Complexity has become an inseparable aspect of systems and also one of the important factors in the failure of projects. While much has been written about project complexity, there is still a lack of understanding of what constitutes it. This research includes a systematic literature review to demonstrate the current understanding of commonalities and differences in the existing research. This was achieved by examining more than 420 published research papers, drawn from an original group of approximately 10,000, based on citations during the period 1990–2015. As a result of this exploration, an integrative systemic framework is presented to demonstrate understanding of project complexity.
It was found that there are three primary and distinct models of project complexity: the Project Management Institute view, the System of Systems view, and the Complexity Theories view, the last developed from the analysis of citations of research papers. Further testing is required on a range of complex projects in order to attempt to reconcile these views.
• Providing a clarification of the ontology/epistemology of project complexity (subjective and objective)
• Exploring the historical development of project complexity, considering the dominant schools of thought
• Identifying the core complexity factors (CCF) required for project managers to manage complex projects
An optimized implementation of S-boxes has a significant impact on the performance of cryptographic primitives. SAT-based methods can find optimal implementations for moderately sized S-boxes, but their efficiency decreases when handling complex S-boxes. To improve the efficiency of the implementations, we propose two different methods, namely OR-encoding and IF-encoding, to encode the implementations of S-boxes. Furthermore, we also simplify the encoding of the outputs of logic gates and introduce new SAT-based search methods to optimize the implementations of S-boxes. Finally, to get a better trade-off between the search results (optimized implementations of S-boxes) and the search efficiency (in terms of time complexity), an encoding scheme using local solutions is proposed. Compared to the previous methods, our algorithms are relatively simple and more efficient. For instance, when a serial software implementation is considered, the S-boxes of Sycon and ASCON, and the χ function in Xoodyak, require 6, 1, and 2 fewer programming instructions, respectively, than the best known methods. Similar improvements are obtained for hardware implementations of S-boxes in some cryptographic primitives (e.g. LBlock, RECTANGLE, PRESENT/PHOTON-Beetle, TWINE, and ASCON), with gate-equivalent (GE) savings ranging from 1.67 GE to 5.34 GE compared to the current best implementations. Furthermore, our model can be applied to 6-bit, 7-bit, and 8-bit S-boxes when the considered S-boxes are of low complexity.
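The kind of gate constraint such SAT-based searches emit can be illustrated with the standard Tseitin-style CNF for a single AND gate (a toy fragment of my own; the encodings in the paper cover full gate libraries and whole instruction sequences):

```python
from itertools import product

def and_gate_cnf(a, b, y):
    """CNF clauses asserting y == (a AND b), with variables given as
    positive integer indices (a negative literal means negation).
    Standard Tseitin encoding: (~a | ~b | y), (a | ~y), (b | ~y)."""
    return [[-a, -b, y], [a, -y], [b, -y]]

def satisfies(clauses, assignment):
    """Check a CNF against an assignment mapping var index -> bool."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)
```

A SAT-based optimizer conjoins one such gadget per candidate gate, adds the S-box truth table as unit constraints, and asks the solver whether an implementation with k gates exists.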
The advancements in the field of project management have driven researchers to take heed of numerous issues related to evaluating and managing complexity in projects, which demonstrates the evident significance of the subject. Among several key factors, organizational factors make up a large portion of project complexity, as previous research confirms. While several project complexity measures do exist, every measure has its limits and evaluates project complexity by its own criteria. Furthermore, the existing literature lacks modelling of these organizational factors to explore the interrelationships among them. This study aims to identify and model these factors to assist project managers in handling the organizational factors of project complexity in a more regulated fashion. The model is developed using the structural equation modelling technique. Findings include the noticeable effect of project size on project complexity, as well as of other factors. Positive effects of project variety and of interdependencies on project complexity are also observed.
• We model organizational factors of project complexity.
• We examine interrelationships among these factors and measure them.
• Increased project variety will escalate project complexity.
• Increased interdependencies within the project will escalate project complexity.
• Project size indirectly affects project complexity.
We study homomorphism polynomials, which are polynomials that enumerate all homomorphisms from a pattern graph H to n-vertex graphs. These polynomials have received a lot of attention recently for their crucial role in several new algorithms for counting and detecting graph patterns, and also for obtaining natural polynomial families which are complete for the algebraic complexity classes VBP, VP, and VNP. We discover that, in the monotone setting, the formula complexity, the ABP complexity, and the circuit complexity of such polynomial families are exactly characterized by the treedepth, the pathwidth, and the treewidth of the pattern graph, respectively. Furthermore, we establish a single, unified framework, using our characterization, to collect several known results that were obtained independently via different methods. For instance, we attain superpolynomial separations between circuits, ABPs, and formulas in the monotone setting, where the polynomial families separating the classes all correspond to well-studied combinatorial problems. Moreover, our proofs rediscover fine-grained separations between these models for constant-degree polynomials.
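For concreteness, the object in question is usually defined as follows: summing over all vertex maps from the pattern to an n-vertex host, with one edge variable per host edge (this is the standard definition of the homomorphism polynomial; the notation is my own, not quoted from the abstract):

```latex
\mathrm{Hom}^{n}_{H}(\bar{x})
  \;=\;
  \sum_{\varphi \,:\, V(H) \to [n]}
  \;\prod_{\{u,v\} \in E(H)} x_{\varphi(u)\varphi(v)}
```

Each monomial records the image of the pattern's edges, so evaluating the polynomial at the 0/1 adjacency values of a host graph counts its homomorphisms from H.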