• Many L2 writing studies focused on only a handful of CALF metrics.
• Most syntactic complexity metrics were associated with oral language production.
• Results found significant effects of task complexity features on written L2 CALF.
• Results found no clear support for the cognition hypothesis.
• Results may be explained via Kellogg’s model of working memory in L1 writing.
This study, a research synthesis and quantitative meta-analysis, contributes to recent L2 writing research on task complexity and its impact on the syntactic complexity, accuracy, lexical complexity, and fluency (CALF) of written L2 production. Through a systematic analysis of task-based L2 writing research from 1998 to the present, the study aimed to better understand (a) how task complexity has been manipulated in previous research, (b) the range of metrics used in previous research to quantify L2 written CALF, and (c) the specific effects of task complexity manipulation on L2 written CALF. The results of the research synthesis indicate that a handful of task complexity features have received a great deal of attention compared to other, less studied task complexity features. Further, the results of the research synthesis suggest that many studies rely on relatively few metrics of CALF, often focusing on metrics of syntactic complexity associated with complex forms more typical of oral language production (Biber, 1988; Biber & Conrad, 2009; Biber & Gray, 2010; Biber, Gray, & Poonpon, 2011, 2013). The results of the quantitative meta-analysis indicate significant effects of increased resource-directing and resource-dispersing features of task complexity on the CALF of written L2 production. The results offer no clear support for the cognition hypothesis (Robinson, 2001, 2003, 2005, 2011), but rather suggest that features of task complexity may promote attention to the formulation and monitoring systems of the writing process (Kellogg, 1996; Kellogg, Whiteford, Turner, Cahill, & Mertens, 2013).
• The paper distinguishes between three types of supply chain complexity: static, dynamic, and decision-making.
• It classifies supply chain complexity drivers as internal, supply/demand interface, and external/environmental.
• The systemic property of supply chains can be exploited to shift complexity from one driver to another.
• There is a need to reduce or prevent unnecessary complexity and to manage necessary complexity.
Studies on supply chain complexity mainly rely on the distinction between static and dynamic complexity. Static complexity describes the structure of the supply chain: the number and variety of its components and the strengths of the interactions between them. Dynamic complexity, by contrast, represents the uncertainty in the supply chain and involves aspects of time and randomness. This distinction also holds when classifying the drivers of supply chain complexity according to the way they are generated. Supply chain complexity drivers (e.g., number/variety of suppliers, number/variety of customers, number/variety of interactions, conflicting policies, demand amplification, differing/conflicting/non-synchronized decisions and actions, incompatible IT systems) play a significant and varying role in dealing with the complexity of different types of supply chains (e.g., food, chemical, electronics, automotive).
This paper reviews the typical complexity drivers faced in different types of supply chains and presents pairings of complexity drivers and solution strategies in the form of a matrix. Drivers and strategies are extracted from real-life supply chain situations gathered from multiple existing sources, such as reports, archives, observations, and interviews. This synthesis of good practices can assist decision-makers in formulating appropriate strategies to deal with complexity in their supply chains.
We present a contact planner for complex legged locomotion tasks: standing up, climbing stairs using a handrail, crossing rubble, and getting out of a car. The need for such a planner was shown at the DARPA Robotics Challenge, where such behaviors could not be demonstrated (except for egress). Current planners suffer from prohibitive algorithmic complexity because they deploy a tree of robot configurations projected in contact with the environment. We tackle this issue by introducing a reduction property: the reachability condition. This condition defines a geometric approximation of the contact manifold that is of low dimension, presents a Cartesian topology, and can be efficiently sampled and explored. The hard contact-planning problem can then be decomposed into two subproblems: first, we plan a path for the root without considering the whole-body configuration, using a sampling-based algorithm; then, we generate a discrete sequence of whole-body configurations in static equilibrium along this path, using a deterministic contact-selection algorithm. The reduction breaks the algorithmic complexity encountered in previous works, resulting in the first interactive implementation of a contact planner (open source). While no contact planner has yet been proposed with theoretical completeness, we empirically show the interest of our framework: in a few seconds, with high success rates, it generates complex contact plans for various scenarios and two robots, HRP-2 and HyQ. These plans are validated in dynamic simulations or on the real HRP-2 robot.
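The two-stage decomposition described in this abstract (a root path planned first, contacts selected deterministically afterward) can be sketched in a toy form. Everything below is a hedged illustration: the 2D corridor geometry, the `reachable` test standing in for the reachability condition, and all function names are invented stand-ins, not the planner's actual implementation.

```python
import random

def reachable(p):
    # Stand-in for the reachability condition: a cheap geometric test that
    # approximates the contact manifold (here: a horizontal corridor).
    return 0.0 <= p[1] <= 1.0

def plan_root_path(start, goal, steps=200):
    """Phase 1: sampling-based path for the root alone (toy random walk,
    not an actual RRT). Returns a list of 2D root poses, or None."""
    random.seed(0)  # deterministic for the example
    path, p = [start], start
    for _ in range(steps):
        step = 0.1 if p[0] < goal[0] else -0.1
        y = min(max(p[1] + random.uniform(-0.05, 0.05), 0.0), 1.0)
        cand = (round(p[0] + step, 3), y)
        if reachable(cand):
            p = cand
            path.append(p)
        if abs(p[0] - goal[0]) < 0.05:
            return path
    return None

def select_contacts(path):
    """Phase 2: deterministic contact selection along the root path
    (toy stand-in: one support point on the ground under each pose)."""
    return [(p[0], 0.0) for p in path]

root_path = plan_root_path((0.0, 0.5), (2.0, 0.5))
contacts = select_contacts(root_path)
```

The point of the sketch is only the structure: the expensive whole-body search is replaced by a low-dimensional root search plus a per-step contact choice.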
In recent years, a clear trend toward simplification has emerged in the development of robotic hands. Soft robotic approaches have been a useful tool in this respect, enabling complexity reduction by embodying part of the grasping intelligence in the hand's mechanical structure. Several hand prototypes designed according to such principles have achieved good results in terms of grasping simplicity, robustness, and reliability. Among them, the Pisa/IIT SoftHand demonstrated the feasibility of a large variety of grasping tasks by means of only one actuator and a suitably designed tendon-driven differential mechanism. However, the use of a single degree of actuation prevents the execution of more complex tasks, such as fine preshaping of the fingers and in-hand manipulation. While possible in theory, simply doubling the Pisa/IIT SoftHand actuation system has several disadvantages, e.g., in terms of space and mechanical complexity. To overcome these limitations, we propose a novel design framework for tendon-driven mechanisms whose main idea is to turn transmission friction from a disturbance into a design tool. In this way, the degrees of actuation (DoAs) can be doubled with little additional complexity. Leveraging this idea, we design a novel robotic hand, the Pisa/IIT SoftHand 2. We present its design, modeling, control, and experimental validation. The hand demonstrates that, by appropriately combining only two DoAs with hand softness, a large variety of grasping and manipulation tasks can be performed, relying only on the intelligence embodied in the mechanism. Examples include rotating objects of different shapes, opening a jar, and pouring coffee from a glass.
Complexity is a term applied throughout the project management field, and project complexity typically presents additional management challenges to achieving project objectives. Without an appropriate approach to assessing and managing project complexity, project teams frequently face difficulties in executing their projects. This study provides a framework for developing a tool that can effectively measure and assess the complexity level of a project. The tool was designed around a "Complexity Measurement Matrix" and comprises 37 complexity indicators (CIs) shown to be significant in describing project complexity. The complexity measurement scales were developed by normalizing data gathered from a survey of 44 completed projects, and the weight factors of the 37 CIs were calculated through three rounds of the Delphi method. The tool generates a set of comprehensive reports and provides users with the overall project complexity level, a series of radar diagrams describing the most important indicators, and associated strategies to manage each indicator. This study contributes to the body of knowledge by providing a pioneering approach to assessing project complexity, and it aids practitioners in project complexity management by identifying the most important complexity contributors and focusing attention on complexity-associated challenges.
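The scoring idea behind such a measurement matrix, weight factors applied to normalized indicator scores and summed into an overall level, can be illustrated minimally. The indicator names, weights, and scores below are invented for illustration; they are not the study's 37 CIs or its Delphi-derived weights.

```python
# Toy weighted scoring in the spirit of a Complexity Measurement Matrix.
# Each entry maps a hypothetical indicator to (weight, normalized score 1-5).
indicators = {
    "stakeholder_count": (0.40, 3),
    "schedule_pressure": (0.35, 4),
    "tech_novelty":      (0.25, 2),
}

# Overall complexity level: weighted sum of the normalized indicator scores.
overall = sum(w * s for w, s in indicators.values())
```

With these illustrative numbers the overall level is 0.40*3 + 0.35*4 + 0.25*2 = 3.1; in the actual tool the weights come from the Delphi rounds and the scores from the survey-based normalization.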
We consider the ε-Consensus-Halving problem, in which a set of heterogeneous agents aim at dividing a continuous resource into two (not necessarily contiguous) portions that all of them simultaneously consider to be of approximately the same value (up to ε). This problem was recently shown to be PPA-complete, for n agents and n cuts, even for very simple valuation functions. In a quest to understand the root of the complexity of the problem, we consider the setting where there is only a constant number of agents, and we study both the computational complexity and the query complexity of the problem.
For agents with monotone valuation functions, we show a dichotomy: for two agents the problem is polynomial-time solvable, whereas for three or more agents it becomes PPA-complete. Similarly, we show that for two monotone agents the problem can be solved with polynomially-many queries, whereas for three or more agents, we provide exponential query complexity lower bounds. These results are enabled via an interesting connection to a monotone Borsuk-Ulam problem, which may be of independent interest. For agents with general valuations, we show that the problem is PPA-complete and admits exponential query complexity lower bounds, even for two agents.
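For intuition about the easy end of this dichotomy, the single-agent case with a monotone valuation (here induced by a nonnegative density) can be solved with one cut by bisection. The sketch below is only an illustration of why very few agents are tractable; the hardness results above concern three or more agents, and the function names are hypothetical.

```python
def consensus_half(value, eps=1e-6):
    """Find a cut t in [0, 1] with value(0, t) approximately equal to
    value(t, 1) for one agent, by bisection. Assumes value(a, b) comes
    from a nonnegative density, so value(0, t) is monotone in t."""
    lo, hi = 0.0, 1.0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if value(0.0, mid) < value(mid, 1.0):
            lo = mid  # left portion too small: move the cut right
        else:
            hi = mid  # left portion too large: move the cut left
    return (lo + hi) / 2

# Example agent: density 2x on [0, 1], so value(a, b) = b^2 - a^2.
# The equal-value cut satisfies t^2 = 1 - t^2, i.e. t = sqrt(1/2).
t = consensus_half(lambda a, b: b * b - a * a)
```

Monotonicity is what makes the bisection sound; for general valuations, as the abstract notes, even two agents already force exponential query complexity.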
The emerging orthogonal time frequency space (OTFS) modulation technique has shown its superiority to the current orthogonal frequency division multiplexing (OFDM) scheme in terms of its capability to exploit full time-frequency diversity and cope with channel dynamics. Optimal maximum a posteriori (MAP) detection can eliminate the negative impact of inter-symbol interference in the delay-Doppler (DD) domain, but at a prohibitively high complexity. To reduce the receiver complexity of the OTFS scheme, this paper proposes a variational Bayes (VB) approach as an approximation of optimal MAP detection. In contrast to the widely used message passing algorithm, we prove that the proposed iterative algorithm is guaranteed to converge to the global optimum of the approximated MAP detector regardless of whether the resulting factor graph is loopy. Simulation results validate the fast convergence of the proposed VB receiver and show its promising performance gain over the conventional message passing algorithm.
Variation in communicative complexity has been conceptually and empirically attributed to social complexity, with animals living in more complex social environments exhibiting more signals and/or more complex signals than animals living in simpler social environments. As compelling as studies highlighting a link between social and communicative variables are, this hypothesis remains challenged by operational problems, contrasting results, and several weaknesses of the associated tests. Specifically, how to best operationalize social and communicative complexity remains debated; alternative hypotheses, such as the role of a species’ ecology, morphology, or phylogenetic history, have been neglected; and the actual ways in which variation in signaling is directly affected by social factors remain largely unexplored. In this review, we address these three issues and propose an extension of the “social complexity hypothesis for communicative complexity” that resolves and acknowledges the above factors. We specifically argue for integrating the inherently multimodal nature of communication into a more comprehensive framework and for acknowledging the social context of derived signals and the potential of audience effects. By doing so, we believe it will be possible to generate more accurate predictions about which specific social parameters may be responsible for selection on new or more complex signals, as well as to uncover potential adaptive functions that are not necessarily apparent from studying communication in only one modality.
On solving a convex-concave bilinear saddle-point problem (SPP), many works have studied the complexity of first-order methods. These results are all upper complexity bounds, which determine at most how many iterations guarantee a solution of desired accuracy. In this paper, we pursue the opposite direction by deriving lower complexity bounds of first-order methods on large-scale SPPs. Our results apply to methods whose iterates lie in the linear span of past first-order information, as well as to more general methods that produce their iterates in an arbitrary manner based on first-order information. We first work on affinely constrained smooth convex optimization, which is a special case of SPP. Different from gradient methods on unconstrained problems, we show that first-order methods on affinely constrained problems generally cannot be accelerated from the known convergence rate O(1/t) to O(1/t^2), and in addition, O(1/t) is optimal for convex problems. Moreover, we prove that for strongly convex problems, O(1/t^2) is the best possible convergence rate, while it is known that gradient methods achieve linear convergence on unconstrained problems. Then we extend these results to general SPPs. It turns out that our lower complexity bounds match several established upper complexity bounds in the literature; hence they are tight and indicate the optimality of several existing first-order methods.
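The contrast between the unconstrained and affinely constrained rates stated in this abstract can be summarized side by side (t denotes the iteration count):

```latex
% Best possible rates of first-order methods, as stated in the abstract:
\begin{align*}
  \text{smooth convex, unconstrained:}            &\quad O(1/t^2) \text{ (accelerated gradient)}\\
  \text{smooth convex, affinely constrained:}     &\quad O(1/t) \text{ optimal; no acceleration to } O(1/t^2)\\
  \text{strongly convex, unconstrained:}          &\quad \text{linear convergence}\\
  \text{strongly convex, affinely constrained:}   &\quad O(1/t^2) \text{ optimal}
\end{align*}
```

The takeaway is that the affine constraints, not the objective alone, are what block the familiar accelerations.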