DeepVCA: Deep Video Complexity Analyzer Amirpour, Hadi; Schoeffmann, Klaus; Ghanbari, Mohammad ...
IEEE transactions on circuits and systems for video technology,
2024
Journal Article
Peer reviewed
Open access
Video streaming and its applications are growing rapidly, making video optimization a primary target for content providers looking to enhance their services. Enhancing the quality of videos requires the adjustment of different encoding parameters such as bitrate, resolution, and frame rate. To avoid brute-force approaches for predicting optimal encoding parameters, video complexity features are typically extracted and utilized. To predict optimal encoding parameters effectively, content providers traditionally use unsupervised feature extraction methods, such as ITU-T's Spatial Information (SI) and Temporal Information (TI), to represent the spatial and temporal complexity of video sequences. Recently, the Video Complexity Analyzer (VCA) was introduced to extract DCT-based features to represent the complexity of a video sequence (or parts thereof). These unsupervised features, however, cannot accurately predict video encoding parameters. To address this issue, this paper introduces a novel supervised feature extraction method named DeepVCA, which extracts the spatial and temporal complexity of video sequences using deep neural networks. In this approach, the encoding bits required to encode each frame in intra-mode and inter-mode are used as labels for spatial and temporal complexity, respectively. Initially, we benchmark various deep neural network structures to predict spatial complexity. We then leverage the similarity of features used to predict the spatial complexity of the current frame and its previous frame to rapidly predict temporal complexity. This approach is particularly useful as the temporal complexity may depend not only on the differences between two consecutive frames but also on their spatial complexity. Our proposed approach demonstrates significant improvement over unsupervised methods, especially for temporal complexity. As an example application, we verify the effectiveness of these features in predicting the encoding bitrate and encoding time of video sequences, which are crucial tasks in video streaming. The source code and dataset are available at https://github.com/cd-athena/DeepVCA.
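The abstract describes the idea at a high level only; the following is a minimal, hypothetical PyTorch sketch of that idea, not the DeepVCA architecture: a small shared CNN regresses per-frame spatial complexity (intra-mode bits), and temporal complexity (inter-mode bits) is regressed from the concatenated features of the current and previous frame, so the backbone features are reused as the abstract suggests. Layer sizes, names, and input shapes are illustrative assumptions.

```python
# Hypothetical sketch of a DeepVCA-style complexity predictor (not the paper's model).
# A shared backbone extracts frame features; spatial complexity is predicted from the
# current frame, temporal complexity from current + previous frame features.
import torch
import torch.nn as nn

class ComplexityNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared convolutional backbone (illustrative layer sizes).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head for spatial complexity (label: intra-mode bits of the frame).
        self.spatial_head = nn.Linear(32, 1)
        # Head for temporal complexity (label: inter-mode bits), fed with the
        # features of the current and the previous frame.
        self.temporal_head = nn.Linear(64, 1)

    def forward(self, frame, prev_frame):
        f_cur = self.backbone(frame)        # features of the current frame
        f_prev = self.backbone(prev_frame)  # features of the previous frame (reused)
        spatial = self.spatial_head(f_cur)
        temporal = self.temporal_head(torch.cat([f_cur, f_prev], dim=1))
        return spatial, temporal

# Usage on dummy luma crops (batch of 2, 1 channel, 64x64).
model = ComplexityNet()
cur = torch.rand(2, 1, 64, 64)
prev = torch.rand(2, 1, 64, 64)
spatial_bits, temporal_bits = model(cur, prev)
```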
The research explores the historical development of project complexity. Projects are becoming more complex due to unexpected emergent behaviour and characteristics. Complexity has become an inseparable aspect of systems and also one of the important factors in the failure of projects. While much has been written about project complexity, there is still a lack of understanding of what constitutes project complexity. This research includes a systematic literature review to demonstrate the current understanding of commonalities and differences in the existing research. This was achieved by examining more than 420 published research papers, drawn from an original group of approximately 10,000, based on citations during the period of 1990–2015. As a result of this exploration, an integrative systemic framework is presented to demonstrate understanding of project complexity.
It was found that there are three primary and distinctive models of project complexity: the Project Management Institute view, the System of Systems view, and the view developed from the analysis of citations of research papers, which is called the Complexity Theories view. Further testing is required on a range of complex projects in order to attempt to reconcile these views.
•Providing a clarification of the ontology/epistemology of project complexity (subjective and objective)
•Exploring the historical development of project complexity, considering dominant schools of thought
•Identifying the core complexity factors (CCF) required for project managers to manage complex projects
The complexity of physician power Nimmon, Laura
Science (American Association for the Advancement of Science),
2024-May-17, Volume: 384, Issue: 6697
Journal Article
Peer reviewed
Open access
Inequitable variation in physician effort and resource use is revealed.
An optimized implementation of S-boxes has a significant impact on the performance of cryptographic primitives. SAT-based methods can find optimal implementations for moderately sized S-boxes, but their efficiency decreases when handling complex S-boxes. To improve the efficiency of the implementations, we propose two different methods, namely OR-encoding and IF-encoding, to encode the implementations of S-boxes. Furthermore, we also simplify the encoding of the outputs of logic gates and introduce new SAT-based search methods to optimize the implementations of S-boxes. Finally, to get a better trade-off between the search results (optimized implementations of S-boxes) and the search efficiency (in terms of time complexity), an encoding scheme using local solutions is proposed. Compared to the previous methods, our algorithms are relatively simple and more efficient. For instance, when a serial software implementation is considered, the S-boxes of Sycon and ASCON and the $\chi$ function in Xoodyak require 6, 1, and 2 fewer programming instructions, respectively, than the best known methods. Similar improvements are obtained for hardware implementations of S-boxes in some cryptographic primitives (e.g., LBlock, RECTANGLE, PRESENT/PHOTON-Beetle, TWINE, and ASCON), with gate-equivalent (GE) savings ranging from 1.67 GE to 5.34 GE compared to the current best implementations. Furthermore, our model can be applied to 6-bit, 7-bit, and 8-bit S-boxes when the considered S-boxes are of low complexity.
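As a purely illustrative aside (not the paper's OR-/IF-encoding), the sketch below spells out the object such SAT searches optimize: a straight-line program of bitwise instructions that must reproduce an S-box on every input, scored by its instruction count. The target here is the 3-bit $\chi$ map defined by $y_i = x_i \oplus (\lnot x_{i+1} \wedge x_{i+2})$, and the candidate program is simply its direct 9-instruction translation; a SAT-based search would encode this exhaustive equivalence check symbolically and ask a solver whether a shorter program exists.

```python
# Toy illustration of the search target: verify that a straight-line program of
# bitwise instructions implements a given S-box on every input. SAT-based synthesis
# encodes this check symbolically; here we only verify one hand-written candidate.

def chi3(x):
    """Reference 3-bit chi: output bit i = x_i ^ (~x_{i+1} & x_{i+2}), indices mod 3."""
    b = [(x >> i) & 1 for i in range(3)]
    y = [b[i] ^ ((1 - b[(i + 1) % 3]) & b[(i + 2) % 3]) for i in range(3)]
    return y[0] | (y[1] << 1) | (y[2] << 2)

# Candidate straight-line program: ("op", dst, src...) over named bit registers.
PROGRAM = [
    ("NOT", "t0", "x1"), ("AND", "t1", "t0", "x2"), ("XOR", "y0", "x0", "t1"),
    ("NOT", "t2", "x2"), ("AND", "t3", "t2", "x0"), ("XOR", "y1", "x1", "t3"),
    ("NOT", "t4", "x0"), ("AND", "t5", "t4", "x1"), ("XOR", "y2", "x2", "t5"),
]

def run(x):
    r = {"x%d" % i: (x >> i) & 1 for i in range(3)}
    for op, dst, *src in PROGRAM:
        a = r[src[0]]
        r[dst] = (1 - a) if op == "NOT" else (a & r[src[1]] if op == "AND" else a ^ r[src[1]])
    return r["y0"] | (r["y1"] << 1) | (r["y2"] << 2)

# Exhaustive equivalence check over all 8 inputs -- the constraint a SAT encoding
# asserts for every input while minimizing the instruction count.
assert all(run(x) == chi3(x) for x in range(8))
print("candidate matches chi3 with", len(PROGRAM), "instructions")
```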
The advancements in the field of project management have driven researchers to take heed of numerous issues related to evaluating and managing complexity in projects, which demonstrates the evident significance of the subject. Among several key factors, organizational factors make up a large portion of project complexity, as previous research confirms. While several project complexity measures do exist, every measure has its limits and evaluates project complexity by its own criteria. Furthermore, the existing literature lacks modelling of these organizational factors to explore the interrelationships among them. This study aims to identify and model these factors to assist project managers in handling organizational factors of project complexity in a more regulated fashion. The model is developed using the structural equation modelling technique. Findings include the noticeable effect of project size on project complexity as well as on other factors. Positive effects of project variety and interdependencies on project complexity are also observed.
•We model organizational factors of project complexity.
•We examine interrelationships among these factors and measure them.
•Increased project variety will escalate project complexity.
•Increased interdependencies within the project will escalate project complexity.
•Project size indirectly affects project complexity.
We study homomorphism polynomials, which are polynomials that enumerate all homomorphisms from a pattern graph H to n-vertex graphs. These polynomials have received a lot of attention recently for their crucial role in several new algorithms for counting and detecting graph patterns, and also for obtaining natural polynomial families which are complete for the algebraic complexity classes VBP, VP, and VNP. We discover that, in the monotone setting, the formula complexity, the ABP complexity, and the circuit complexity of such polynomial families are exactly characterized by the treedepth, the pathwidth, and the treewidth of the pattern graph, respectively. Furthermore, we establish a single, unified framework, using our characterization, to collect several known results that were obtained independently via different methods. For instance, we attain superpolynomial separations between circuits, ABPs, and formulas in the monotone setting, where the polynomial families separating the classes all correspond to well-studied combinatorial problems. Moreover, our proofs rediscover fine-grained separations between these models for constant-degree polynomials.
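For orientation, one common way to write such a polynomial (a hedged reconstruction of the standard definition, not quoted from the paper) is over edge variables of an n-vertex host graph:

```latex
% One standard form of the homomorphism polynomial of a pattern graph H
% over edge variables x_{i,j} of an n-vertex host graph (illustrative form).
\[
  \mathrm{Hom}_{H,n}(x) \;=\; \sum_{\varphi \colon V(H) \to [n]} \;
  \prod_{\{u,v\} \in E(H)} x_{\varphi(u),\varphi(v)}.
\]
% Each monomial records one map from V(H) into [n]; evaluated on the host graph's
% adjacency variables, a monomial survives exactly when every edge of H lands on a
% host edge, so the polynomial enumerates the homomorphisms from H.
```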
The celebrated minimax principle of Yao says that for any Boolean-valued function f with finite domain, there is a distribution μ over the domain of f such that computing f to error ε against inputs from μ is just as hard as computing f to error ε on worst-case inputs. Notably, however, the distribution μ depends on the target error level ε: the hard distribution which is tight for bounded error might be trivial to solve to small bias, and the hard distribution which is tight for a small bias level might be far from tight for bounded error levels. In this work, we introduce a new type of minimax theorem which can provide a hard distribution μ that works for all bias levels at once. We show that this works for randomized query complexity, randomized communication complexity, some randomized circuit models, quantum query and communication complexities, approximate polynomial degree, and approximate logrank. We also prove an improved version of Impagliazzo’s hardcore lemma. Our proofs rely on two innovations over the classical approach of using Von Neumann’s minimax theorem or linear programming duality. First, we use Sion’s minimax theorem to prove a minimax theorem for ratios of bilinear functions representing the cost and score of algorithms. Second, we introduce a new way to analyze low-bias randomized algorithms by viewing them as “forecasting algorithms” evaluated by a certain proper scoring rule. The expected score of the forecasting version of a randomized algorithm appears to be a more fine-grained way of analyzing the bias of the algorithm. We show that such expected scores have many elegant mathematical properties—for example, they can be amplified linearly instead of quadratically. We anticipate forecasting algorithms will find use in future work in which a fine-grained analysis of small-bias algorithms is required.
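For reference, one common statement of Yao's principle at a fixed error level (our paraphrase, for orientation; the paper's contribution is a single hard distribution that works across bias levels) reads as follows.

```latex
% Yao's minimax principle at a fixed error level eps (paraphrased):
% R_eps(f)  = worst-case eps-error randomized cost of f,
% D_eps^mu(f) = eps-error cost against deterministic algorithms on inputs from mu.
\[
  R_{\varepsilon}(f) \;=\; \max_{\mu} \; D^{\mu}_{\varepsilon}(f).
\]
% The maximizing hard distribution mu may change with eps; the minimax theorems in
% this work instead produce one distribution that is hard at every bias level at once.
```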
•Many L2 writing studies focused on only a handful of CALF metrics.
•Most syntactic complexity metrics were associated with oral language production.
•Results found significant effects of task complexity features on written L2 CALF.
•Results found no clear support for the cognition hypothesis.
•Results may be explained via Kellogg’s model of working memory in L1 writing.
This study, a research synthesis and quantitative meta-analysis, contributes to recent L2 writing research on task complexity and its impact on the syntactic complexity, accuracy, lexical complexity, and fluency (CALF) of written L2 production. Through a systematic analysis of task-based L2 writing research from 1998 to the present, the study aimed to better understand (a) how task complexity has been manipulated in previous research, (b) the range of metrics used in previous research to quantify L2 written CALF, and (c) the specific effects of task complexity manipulation on L2 written CALF. The results of the research synthesis indicate that a handful of task complexity features have received a great deal of attention compared to other, less studied task complexity features. Further, the results of the research synthesis suggest that many studies rely on relatively few metrics of CALF, often focusing on metrics of syntactic complexity associated with complex forms more typical of oral language production (Biber, 1988; Biber & Conrad, 2009; Biber & Gray, 2010; Biber, Gray, & Poonpon, 2011, 2013). The results of the quantitative meta-analysis indicate significant effects of increased resource-directing and resource-dispersing features of task complexity on the CALF of written L2 production. The results offer no clear support for the cognition hypothesis (Robinson, 2001, 2003, 2005, 2011), but rather suggest that features of task complexity may promote attention to the formulation and monitoring systems of the writing process (Kellogg, 1996; Kellogg, Whiteford, Turner, Cahill, & Mertens, 2013).
High efficiency video coding (HEVC) significantly reduces bit rates over the preceding H.264 standard, but at the expense of extremely high encoding complexity. In HEVC, the quad-tree partition of the coding unit (CU) consumes a large proportion of the HEVC encoding complexity, due to the brute-force search for rate-distortion optimization (RDO). Therefore, this paper proposes a deep learning approach to predict the CU partition for reducing the HEVC complexity at both intra- and inter-modes, based on a convolutional neural network (CNN) and a long short-term memory (LSTM) network. First, we establish a large-scale database including substantial CU partition data for the HEVC intra- and inter-modes. This enables deep learning on the CU partition. Second, we represent the CU partition of an entire coding tree unit in the form of a hierarchical CU partition map (HCPM). Then, we propose an early-terminated hierarchical CNN (ETH-CNN) for learning to predict the HCPM. Consequently, the encoding complexity of intra-mode HEVC can be drastically reduced by replacing the brute-force search with ETH-CNN to decide the CU partition. Third, an ETH-LSTM is proposed to learn the temporal correlation of the CU partition. Then, we combine the ETH-LSTM and the ETH-CNN to predict the CU partition for reducing the HEVC complexity at inter-mode. Finally, experimental results show that our approach outperforms other state-of-the-art approaches in reducing the HEVC complexity at both intra- and inter-modes.
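A minimal sketch of the hierarchical prediction idea is shown below; it is an illustrative stand-in, not the ETH-CNN/ETH-LSTM from the paper. The network maps a 64x64 CTU to split probabilities at three depths (1 for the CTU, 4 for its 32x32 sub-CUs, 16 for the 16x16 sub-CUs), i.e. a flattened form of an HCPM; early termination means the deeper decisions are ignored wherever the parent is confidently predicted as "no split". Layer sizes are assumptions.

```python
# Hypothetical sketch of a hierarchical CU-partition predictor for a 64x64 CTU
# (illustrative layer sizes; not the ETH-CNN architecture from the paper).
import torch
import torch.nn as nn

class PartitionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=4), nn.ReLU(),   # 64x64 -> 16x16
            nn.Conv2d(16, 32, 2, stride=2), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(),
        )
        self.depth1 = nn.Linear(32 * 8 * 8, 1)   # split the 64x64 CTU?
        self.depth2 = nn.Linear(32 * 8 * 8, 4)   # split each 32x32 CU?
        self.depth3 = nn.Linear(32 * 8 * 8, 16)  # split each 16x16 CU?

    def forward(self, ctu):
        f = self.features(ctu)
        return (torch.sigmoid(self.depth1(f)),
                torch.sigmoid(self.depth2(f)),
                torch.sigmoid(self.depth3(f)))

# Usage on a dummy luma CTU: the encoder would skip the RDO search for partitions
# whose predicted split probability is confidently low or high.
model = PartitionNet()
p_ctu, p_32, p_16 = model(torch.rand(1, 1, 64, 64))
```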
In this article, we study the communication and (sub)gradient computation costs in distributed optimization. We present two algorithms based on the framework of the accelerated penalty method with increasing penalty parameters. Our first algorithm is for smooth distributed optimization, and it obtains the near-optimal $O\big(\sqrt{\frac{L}{\epsilon(1-\sigma_2(W))}}\log\frac{1}{\epsilon}\big)$ communication complexity and the optimal $O\big(\sqrt{\frac{L}{\epsilon}}\big)$ gradient computation complexity for $L$-smooth convex problems, where $\sigma_2(W)$ denotes the second largest singular value of the weight matrix $W$ associated with the network, and $\epsilon$ is the target accuracy. When the problem is $\mu$-strongly convex and $L$-smooth, our algorithm has the near-optimal $O\big(\sqrt{\frac{L}{\mu(1-\sigma_2(W))}}\log^2\frac{1}{\epsilon}\big)$ complexity for communications and the optimal $O\big(\sqrt{\frac{L}{\mu}}\log\frac{1}{\epsilon}\big)$ complexity for gradient computations. Our communication complexities are only worse than the lower bounds by a factor of $\log\frac{1}{\epsilon}$. Our second algorithm is designed for nonsmooth distributed optimization, and it achieves both the optimal $O\big(\frac{1}{\epsilon\sqrt{1-\sigma_2(W)}}\big)$ communication complexity and the optimal $O\big(\frac{1}{\epsilon^2}\big)$ subgradient computation complexity, which match the lower bounds for nonsmooth distributed optimization.
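For quick reference, the bounds stated in this abstract can be laid out side by side (this table only restates the abstract, with the same notation):

```latex
% Complexity bounds restated from the abstract (no new results).
\begin{tabular}{lll}
  Setting & Communication & (Sub)gradient computation \\ \hline
  $L$-smooth convex
    & $O\!\big(\sqrt{\tfrac{L}{\epsilon(1-\sigma_2(W))}}\log\tfrac{1}{\epsilon}\big)$ (near-optimal)
    & $O\!\big(\sqrt{\tfrac{L}{\epsilon}}\big)$ (optimal) \\
  $\mu$-strongly convex, $L$-smooth
    & $O\!\big(\sqrt{\tfrac{L}{\mu(1-\sigma_2(W))}}\log^2\tfrac{1}{\epsilon}\big)$ (near-optimal)
    & $O\!\big(\sqrt{\tfrac{L}{\mu}}\log\tfrac{1}{\epsilon}\big)$ (optimal) \\
  Nonsmooth
    & $O\!\big(\tfrac{1}{\epsilon\sqrt{1-\sigma_2(W)}}\big)$ (optimal)
    & $O\!\big(\tfrac{1}{\epsilon^2}\big)$ (optimal) \\
\end{tabular}
```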