Since the 1990s, linguistic complexity has become an important issue in second language acquisition (SLA) research and teaching: second language (L2) learners want to know how well they are progressing, while teachers and researchers are interested in finding out which degree of complexity can be associated with a particular proficiency level. After a short sketch of the background to the construct of complexity, the paper presents an overview of how complexity is measured in SLA, how it is related to other constructs of language proficiency (in particular accuracy and fluency), and by which factors complexity may be affected: these include both internal linguistic factors and external factors, such as task-related features and type of instruction. The paper concludes with directions for future research, focusing on the need for non-redundant, valid, and reliable measures, more developmental measures, a broader scope of complexity, combined cross-linguistic and longitudinal research, and more research in instructional practice.
Variation in communicative complexity has been conceptually and empirically attributed to social complexity, with animals living in more complex social environments exhibiting more signals and/or more complex signals than animals living in simpler social environments. As compelling as studies highlighting a link between social and communicative variables are, this hypothesis remains challenged by operational problems, contrasting results, and several weaknesses of the associated tests. Specifically, how to best operationalize social and communicative complexity remains debated; alternative hypotheses, such as the role of a species’ ecology, morphology, or phylogenetic history, have been neglected; and the actual ways in which variation in signaling is directly affected by social factors remain largely unexplored. In this review, we address these three issues and propose an extension of the “social complexity hypothesis for communicative complexity” that resolves and acknowledges the above factors. We specifically argue for integrating the inherently multimodal nature of communication into a more comprehensive framework and for acknowledging the social context of derived signals and the potential of audience effects. By doing so, we believe it will be possible to generate more accurate predictions about which specific social parameters may be responsible for selection on new or more complex signals, as well as to uncover potential adaptive functions that are not necessarily apparent from studying communication in only one modality.
On solving a convex-concave bilinear saddle-point problem (SPP), there have been many works studying the complexity of first-order methods. These results all concern upper complexity bounds, which can determine at most how many iterations would guarantee a solution of desired accuracy. In this paper, we pursue the opposite direction by deriving lower complexity bounds of first-order methods on large-scale SPPs. Our results apply to methods whose iterates lie in the linear span of past first-order information, as well as to more general methods that produce their iterates in an arbitrary manner based on first-order information. We first work on affinely constrained smooth convex optimization, which is a special case of SPP. In contrast to gradient methods on unconstrained problems, we show that first-order methods on affinely constrained problems generally cannot be accelerated from the known convergence rate O(1/t) to O(1/t^2), and in addition, O(1/t) is optimal for convex problems. Moreover, we prove that for strongly convex problems, O(1/t^2) is the best possible convergence rate, while it is known that gradient methods achieve linear convergence on unconstrained problems. Then we extend these results to general SPPs. It turns out that our lower complexity bounds match several established upper complexity bounds in the literature, and thus they are tight and indicate the optimality of several existing first-order methods.
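In the notation standard for this literature (the abstract itself does not display the formulation, so the symbols below are the usual conventions, not quoted from the paper), the convex-concave bilinear SPP has the form

```latex
\min_{x \in X} \; \max_{y \in Y} \;\; \mathcal{L}(x, y) \;=\; f(x) + \langle A x,\, y \rangle - g(y),
```

where f and g are convex and A is a linear operator. The affinely constrained problem min_x { f(x) : Ax = b } is recovered as the special case Y = R^m, g(y) = ⟨b, y⟩, with y playing the role of the Lagrange multiplier; the lower bounds above then say that no first-order method can beat O(1/t) for convex f, or O(1/t^2) for strongly convex f, on this class.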
Maximum-Order Complexity and 2-Adic Complexity. Chen, Zhiru; Chen, Zhixiong; Obrovsky, Jakob ...
IEEE Transactions on Information Theory, Aug. 2024, Volume 70, Issue 8.
Journal Article. Peer reviewed.
The 2-adic complexity has been well analyzed in the periodic case. However, we are not aware of any theoretical results in the aperiodic case. In particular, the Nth 2-adic complexity has not been studied for any promising candidate of a pseudorandom sequence of finite length N. Also, nothing seems to be known for a part of the period of length N of any cryptographically interesting periodic sequence. Here we introduce the first method for this aperiodic case. More precisely, we study the relation between the Nth maximum-order complexity and the Nth 2-adic complexity of binary sequences and prove a lower bound on the Nth 2-adic complexity in terms of the Nth maximum-order complexity. Then any known lower bound on the Nth maximum-order complexity implies a lower bound on the Nth 2-adic complexity of the same order of magnitude. In the periodic case, one can prove a slightly better result. The latter bound is sharp, which is illustrated by the maximum-order complexity of ℓ-sequences. The idea of the proof helps us to characterize the maximum-order complexity of periodic sequences in terms of the unique rational number defined by the sequence. We also show that a periodic sequence of maximal maximum-order complexity must also be of maximal 2-adic complexity.
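For a finite binary sequence, the Nth maximum-order complexity used above is the length of the shortest feedback shift register, with an arbitrary (not necessarily linear) feedback function, generating the sequence; equivalently, the smallest m for which no two identical length-m windows are followed by different bits. A brute-force sketch of that definition (the function name is ours; the 2-adic bound itself is not computed here):

```python
def max_order_complexity(seq):
    """Smallest m such that the map (length-m window -> next bit) is well defined.

    Equivalently, the length of the shortest feedback shift register with an
    arbitrary feedback function that generates the finite sequence seq.
    """
    n = len(seq)
    for m in range(n):
        successors = {}  # each length-m window -> the bit observed after it
        consistent = True
        for i in range(n - m):
            window = tuple(seq[i:i + m])
            # A conflict means some feedback function of order m cannot exist.
            if successors.setdefault(window, seq[i + m]) != seq[i + m]:
                consistent = False
                break
        if consistent:
            return m
    return n  # degenerate case (empty sequence)
```

For example, the sequence 0^{n-1}1 has maximum-order complexity n-1, which this brute force reproduces; the quadratic scan is fine for illustration, while suffix-tree methods compute the same quantity in linear time.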
In recent years, the rapid development of the Internet, the Internet of Things, and Cloud Computing has led to the explosive growth of data in almost every industry and business area. Big data has rapidly developed into a hot topic that attracts extensive attention from academia, industry, and governments around the world. In this position paper, we first briefly introduce the concept of big data, including its definition, features, and value. We then identify, from different perspectives, the significance and opportunities that big data brings to us. Next, we present representative big data initiatives from around the world. We describe the grand challenges (namely, data complexity, computational complexity, and system complexity), as well as possible solutions to address these challenges. Finally, we conclude the paper by presenting several suggestions on carrying out big data projects.
In 1979, Valiant showed that the complexity class VPe of families with polynomially bounded formula size is contained in the class VPs of families that have algebraic branching programs (ABPs) of polynomially bounded size. Motivated by the problem of separating these classes, we study the topological closure of VPe, i.e., the class of polynomials that can be approximated arbitrarily closely by polynomials in VPe. We describe this closure using the well-known continuant polynomial (in characteristic different from 2). Further understanding this polynomial seems to be a promising route to new formula-size lower bounds. Our methods are rooted in the study of ABPs of small constant width. In 1992, Ben-Or and Cleve showed that formula size is polynomially equivalent to width-3 ABP size. We extend their result (in characteristic different from 2) by showing that approximate formula size is polynomially equivalent to approximate width-2 ABP size. This is surprising because in 2011 Allender and Wang gave explicit polynomials that cannot be computed by width-2 ABPs at all! The details of our construction lead to the aforementioned characterization of the closure of VPe. As a natural continuation of this work, we prove that the class VNP can be described as the class of families that admit a hypercube summation of polynomially bounded dimension over a product of polynomially many affine linear forms. This gives the first separations of algebraic complexity classes from their nondeterministic analogs.
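The continuant polynomial mentioned above satisfies K() = 1, K(x1) = x1, and K(x1,...,xn) = xn·K(x1,...,x(n-1)) + K(x1,...,x(n-2)), and it can be evaluated as a product of 2×2 matrices, which is precisely the shape of a width-2 ABP computation. A minimal sketch (the function name is ours):

```python
def continuant(xs):
    """Evaluate the continuant K(x1, ..., xn) as a product of 2x2 matrices.

    The product of the matrices [[x, 1], [1, 0]], one per variable, carries
    K(x1, ..., xn) in its top-left entry.
    """
    a, b, c, d = 1, 0, 0, 1  # 2x2 identity matrix, rows (a, b) and (c, d)
    for x in xs:
        # Multiply the running product on the right by [[x, 1], [1, 0]].
        a, b = a * x + b, a
        c, d = c * x + d, c
    return a

# Sanity check against the recurrence: K(x1, x2) = x1*x2 + 1,
# and with all inputs 1 the continuant gives the Fibonacci numbers.
```

Nothing here touches approximation, of course; it only makes concrete why the continuant is the natural polynomial attached to width-2 matrix products.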
Social complexity has been one of the recent emerging topics in the study of animal and human societies, but the concept remains both poorly defined and understood. In this paper, I critically review definitions and studies of social complexity in invertebrate and vertebrate societies, arguing that the concept is being used inconsistently in studies of vertebrate sociality. Group size and cohesion define one cornerstone of social complexity, but the nature and patterning of social interactions contribute more to interspecific variation in social complexity in species with individual recognition and repeated interactions. Humans provide the only example where many other unique criteria are used, and they are the only species for which intraspecific variation in social complexity has been studied in detail. While there is agreement that complex patterns emerge at the group level as a result of simple interactions and as a result of cognitive abilities, there is consensus neither on their relative importance nor on the role of specific cognitive abilities in different lineages. Moreover, aspects of reproduction and parental care have also been invoked to characterize levels of social complexity, so that no single comprehensive measure is readily available. Because even fundamental components of social complexity are difficult to compare across studies and species owing to inconsistent definitions and operationalization of key social traits, I define and characterize social organization, social structure, mating system, and care system as distinct components of a social system. Based on this framework, I outline how different aspects of the evolution of social complexity are being studied and suggest questions for future research.
The next-generation Versatile Video Coding (VVC) standard introduces a new Multi-Type Tree (MTT) block partitioning structure that supports Binary-Tree (BT) and Ternary-Tree (TT) splits in both vertical and horizontal directions. This new approach leads to five possible splits at each block depth. It thereby improves the coding efficiency of VVC over that of the preceding High Efficiency Video Coding (HEVC) standard, which only supports Quad-Tree (QT) partitioning with a single split per block depth. However, MTT has also brought a considerable impact on encoder computational complexity. This paper proposes a two-stage learning-based technique to tackle the complexity overhead of MTT in VVC intra encoders. In our scheme, the input block is first processed by a Convolutional Neural Network (CNN) to predict its spatial features through a vector of probabilities describing the partition at each 4×4 edge. Subsequently, a Decision Tree (DT) model leverages this vector of spatial features to predict the most likely splits at each block. Finally, based on this prediction, only the N most likely splits are processed by the Rate-Distortion (RD) process of the encoder. In order to train our CNN and DT models on a wide range of image contents, we also propose a public VVC frame partitioning dataset based on an existing image dataset encoded with the VVC reference software encoder. Our solution relying on the top-3 configuration reaches 47.4% complexity reduction for a negligible bitrate increase of 0.79%. A top-2 configuration enables a higher complexity reduction of 70.4% for 2.49% bitrate loss. These results demonstrate a better trade-off between VTM intra-coding efficiency and complexity reduction compared to the state-of-the-art solutions. The source code of the proposed method and the training dataset are made publicly available on GitHub.
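The final top-N pruning step described above can be sketched as follows. The split labels and probability values are illustrative stand-ins, not taken from the paper; in the real pipeline the probabilities come from the CNN/DT models:

```python
# The five split modes available at each block depth in VVC:
# quad-tree, plus binary- and ternary-tree splits in both directions.
SPLITS = ["QT", "BT_H", "BT_V", "TT_H", "TT_V"]

def top_n_splits(probabilities, n):
    """Keep only the n most likely splits for the encoder's RD search.

    probabilities: dict mapping a split label to its predicted probability.
    Returns the n labels with highest probability, most likely first.
    """
    ranked = sorted(probabilities, key=probabilities.get, reverse=True)
    return ranked[:n]

# Example: with these made-up model outputs, a top-3 configuration would
# run the expensive RD process only for QT, BT_H, and BT_V.
probs = {"QT": 0.35, "BT_H": 0.25, "BT_V": 0.20, "TT_H": 0.12, "TT_V": 0.08}
```

The trade-off reported in the abstract is exactly this knob: smaller n prunes more of the RD search (more speed-up) at the cost of occasionally discarding the truly best split (bitrate loss).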
Descriptive complexity provides intrinsic, that is, machine-independent, characterizations of the major complexity classes. On the other hand, logic can be useful for designing programs in a natural declarative way. This is particularly important for parallel computation models such as cellular automata, because designing parallel programs is considered a difficult task. This paper establishes three logical characterizations of the three classical complexity classes modeling minimal time, called real time, of one-dimensional cellular automata according to their canonical variants: unidirectional or bidirectional communication, and input word given in a parallel or sequential way. Our three logics are natural restrictions of existential second-order Horn logic with built-in successor and predecessor functions. These logics correspond exactly to the three ways of deciding a language on a square grid circuit of side n according to one of the three canonical locations of an input word of length n: along a side of the grid, on the diagonal that contains the output cell, or on the diagonal opposite the output cell. The key ingredient of our results is a normalization method that transforms a formula from one of our three logics into an equivalent normalized formula that faithfully mimics a grid circuit. Then, we extend our logics by allowing a limited use of negation on hypotheses, as in Stratified Datalog. By revisiting in detail a number of representative classical problems - recognition of the set of primes by Fischer's algorithm, Dyck language recognition, the Firing Squad Synchronization problem, etc. - we show that this extension makes programming easier, and we prove that it does not change the real-time complexity of our logics. Finally, starting from our experience in expressing those representative problems in logic, we argue that our logics are high-level programming languages: they allow one to express, in a natural, precise, and synthetic way, the algorithms of the literature, based on signals, and to translate them automatically into cellular automata of the same complexity.
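As a toy illustration of the machine model discussed above (not of the paper's logics themselves), a one-dimensional cellular automaton with bidirectional communication updates every cell synchronously from its own state and its two neighbors, with quiescent cells outside the configuration. A minimal sketch:

```python
def ca_step(config, rule, quiescent=0):
    """One synchronous step of a one-dimensional cellular automaton.

    config: list of cell states; rule: function (left, center, right) -> new
    state. Cells outside the configuration are treated as quiescent.
    """
    padded = [quiescent] + config + [quiescent]
    return [rule(padded[i - 1], padded[i], padded[i + 1])
            for i in range(1, len(padded) - 1)]

# Example rule: each cell becomes the XOR of its two neighbors
# (rule 90 in Wolfram's numbering, restricted to binary states).
xor_rule = lambda l, c, r: l ^ r
```

Real-time recognition in this model means the output cell must decide membership after the minimal number of such steps needed for the input to reach it, which is what the three logics above capture.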