In this article, we study the communication and (sub)gradient computation costs in distributed optimization. We present two algorithms based on the framework of the accelerated penalty method with increasing penalty parameters. Our first algorithm is for smooth distributed optimization: it obtains the near-optimal $O(\sqrt{\frac{L}{\epsilon(1-\sigma_2(W))}}\log\frac{1}{\epsilon})$ communication complexity and the optimal $O(\sqrt{\frac{L}{\epsilon}})$ gradient computation complexity for $L$-smooth convex problems, where $\sigma_2(W)$ denotes the second largest singular value of the weight matrix $W$ associated with the network and $\epsilon$ is the target accuracy. When the problem is $\mu$-strongly convex and $L$-smooth, our algorithm has the near-optimal $O(\sqrt{\frac{L}{\mu(1-\sigma_2(W))}}\log^2\frac{1}{\epsilon})$ communication complexity and the optimal $O(\sqrt{\frac{L}{\mu}}\log\frac{1}{\epsilon})$ gradient computation complexity. Our communication complexities exceed the lower bounds by only a $\log\frac{1}{\epsilon}$ factor. Our second algorithm is designed for nonsmooth distributed optimization: it achieves both the optimal $O(\frac{1}{\epsilon\sqrt{1-\sigma_2(W)}})$ communication complexity and the optimal $O(\frac{1}{\epsilon^2})$ subgradient computation complexity, matching the lower bounds for nonsmooth distributed optimization.
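As a toy illustration of the penalty framework behind these rates (a minimal sketch, not the authors' algorithm), the snippet below runs accelerated gradient descent on the penalized objective $\sum_i f_i(x_i) + \frac{\beta_k}{2}x^\top(I-W)x$ with a growing penalty $\beta_k$, for quadratic local losses on a ring network; the constants and the schedule for $\beta_k$ are chosen ad hoc for the demo.

```python
import numpy as np

# Toy decentralized problem: agent i holds a_i and f_i(x) = 0.5*(x - a_i)^2,
# so the consensus optimum is the average of the a_i.
rng = np.random.default_rng(0)
n = 8
a = rng.normal(size=n)

# Symmetric, doubly stochastic weights for a ring graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3

x = np.zeros(n)        # x[i] is agent i's local copy of the decision variable
x_prev = x.copy()
for k in range(1, 3001):
    beta = 0.01 * k                           # increasing penalty parameter
    y = x + (k - 1) / (k + 2) * (x - x_prev)  # Nesterov extrapolation
    grad = (y - a) + beta * (y - W @ y)       # gradient of penalized objective
    step = 1.0 / (1.0 + 2.0 * beta)           # safe 1/L_k, as lmax(I - W) <= 2
    x_prev, x = x, y - step * grad

print("consensus value:", x.mean(), "target:", a.mean())
print("max disagreement:", np.abs(x - x.mean()).max())
```

Because $(I-W)\mathbf{1} = 0$, the average of the local copies is preserved at the penalized optimum, and the disagreement term shrinks as $\beta_k$ grows.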
The celebrated minimax principle of Yao says that for any Boolean-valued function f with finite domain, there is a distribution μ over the domain of f such that computing f to error ε against inputs from μ is just as hard as computing f to error ε on worst-case inputs. Notably, however, the distribution μ depends on the target error level ε: the hard distribution which is tight for bounded error might be trivial to solve to small bias, and the hard distribution which is tight for a small bias level might be far from tight for bounded error levels. In this work, we introduce a new type of minimax theorem which can provide a hard distribution μ that works for all bias levels at once. We show that this works for randomized query complexity, randomized communication complexity, some randomized circuit models, quantum query and communication complexities, approximate polynomial degree, and approximate logrank. We also prove an improved version of Impagliazzo’s hardcore lemma. Our proofs rely on two innovations over the classical approach of using von Neumann’s minimax theorem or linear programming duality. First, we use Sion’s minimax theorem to prove a minimax theorem for ratios of bilinear functions representing the cost and score of algorithms. Second, we introduce a new way to analyze low-bias randomized algorithms by viewing them as “forecasting algorithms” evaluated by a certain proper scoring rule. The expected score of the forecasting version of a randomized algorithm appears to be a more fine-grained way of analyzing the bias of the algorithm. We show that such expected scores have many elegant mathematical properties—for example, they can be amplified linearly instead of quadratically. We anticipate forecasting algorithms will find use in future work in which a fine-grained analysis of small-bias algorithms is required.
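As a minimal, generic illustration of evaluating forecasts with a proper scoring rule (the Brier score here; the paper's specific rule and forecasting construction are not reproduced), the snippet below checks the defining property of properness: the expected score is maximized by reporting the true probability.

```python
import numpy as np

# Proper scoring rule demo: for the (negated quadratic) Brier score,
# the expected score of a forecast q under true probability p is
# maximized at q = p, which is what makes the rule "proper".
def brier(q, outcome):
    return -(q - outcome) ** 2

p = 0.7                          # true probability of outcome 1
qs = np.linspace(0, 1, 101)      # candidate forecasts
expected = p * brier(qs, 1) + (1 - p) * brier(qs, 0)
print("optimal forecast:", qs[np.argmax(expected)])   # -> 0.7
```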
• Many L2 writing studies focused on only a handful of CALF metrics.
• Most syntactic complexity metrics were associated with oral language production.
• Results found significant effects of task complexity features on written L2 CALF.
• Results found no clear support for the cognition hypothesis.
• Results may be explained via Kellogg’s model of working memory in L1 writing.
This study, a research synthesis and quantitative meta-analysis, contributes to recent L2 writing research on task complexity and its impact on the syntactic complexity, accuracy, lexical complexity, and fluency (CALF) of written L2 production. Through a systematic analysis of task-based L2 writing research from 1998 to the present, the study aimed to better understand (a) how task complexity has been manipulated in previous research, (b) the range of metrics used in previous research to quantify L2 written CALF, and (c) the specific effects of task complexity manipulation on L2 written CALF. The results of the research synthesis indicate that a handful of task complexity features have received a great deal of attention compared to other, less studied task complexity features. Further, the results of the research synthesis suggest that many studies rely on relatively few metrics of CALF, often focusing on metrics of syntactic complexity associated with complex forms more typical of oral language production (Biber, 1988; Biber & Conrad, 2009; Biber & Gray, 2010; Biber, Gray, & Poonpon, 2011, 2013). The results of the quantitative meta-analysis indicate significant effects of increased resource-directing and resource-dispersing features of task complexity on the CALF of written L2 production. The results offer no clear support for the cognition hypothesis (Robinson, 2001, 2003, 2005, 2011), but rather suggest that features of task complexity may promote attention to the formulation and monitoring systems of the writing process (Kellogg, 1996; Kellogg, Whiteford, Turner, Cahill, & Mertens, 2013).
• The paper distinguishes between three types of supply chain complexity: static, dynamic, and decision-making.
• It classifies supply chain complexity drivers as internal, supply/demand interface, and external/environmental.
• It is possible to exploit the systemic property of supply chains to shift complexity from one driver to another.
• There is a need to reduce or prevent unnecessary complexity and to manage necessary complexity.
Studies on supply chain complexity mainly rely on the distinction between static and dynamic complexity. Static complexity describes the structure of the supply chain: the number and variety of its components and the strengths of the interactions between them. Dynamic complexity, in contrast, represents the uncertainty in the supply chain and involves the aspects of time and randomness. This distinction is also valid when classifying the drivers of supply chain complexity according to the way they are generated. Supply chain complexity drivers (e.g., number/variety of suppliers, number/variety of customers, number/variety of interactions, conflicting policies, demand amplification, differing/conflicting/non-synchronized decisions and actions, incompatible IT systems) play a significant and varying role in dealing with the complexity of different types of supply chains (e.g., food, chemical, electronics, automotive).
This paper reviews the typical complexity drivers faced in different types of supply chains and presents the pairings of complexity drivers with solution strategies in the form of a matrix. Drivers and strategies are extracted from real-life supply chain situations gathered from multiple existing sources, such as reports, archives, observations, and interviews. This synthesis of good practices would assist decision-makers in formulating appropriate strategies to deal with complexity in their supply chains.
We introduce the binary value principle, which is a simple subset-sum instance expressing that a natural number written in binary cannot be negative, relating it to central problems in proof and algebraic complexity. We prove conditional superpolynomial lower bounds on the Ideal Proof System (IPS) refutation size of this instance, based on a well-known hypothesis by Shub and Smale about the hardness of computing factorials, where IPS is the strong algebraic proof system introduced by Grochow and Pitassi (2018). Conversely, we show that short IPS refutations of this instance bridge the gap between sufficiently strong algebraic and semi-algebraic proof systems. Our results extend to full-fledged IPS the paradigm introduced in Forbes et al. (2016), whereby lower bounds against subsystems of IPS were obtained using restricted algebraic circuit lower bounds, and demonstrate that the binary value principle captures the advantage of semi-algebraic over algebraic reasoning, for sufficiently strong systems. Specifically, we show the following:
Conditional IPS lower bounds: The Shub-Smale hypothesis (1995) implies a superpolynomial lower bound on the size of IPS refutations of the binary value principle over the rationals, defined as the unsatisfiable linear equation $\sum_{i=1}^{n} 2^{i-1}x_i = -1$ for Boolean $x_i$'s. Further, the related τ-conjecture (1995) implies a superpolynomial lower bound on the size of IPS refutations of a variant of the binary value principle over the ring of rational functions. No prior conditional lower bounds were known for IPS or for apparently much weaker propositional proof systems such as Frege.
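A brute-force sanity check of the instance's unsatisfiability over Boolean assignments is immediate (illustrative only; the paper's contribution concerns the refutation *size*, not the fact itself):

```python
from itertools import product

# The binary value principle sum_{i=1..n} 2^(i-1) x_i = -1 has no 0/1
# solution: the left-hand side is the value of an n-bit binary number,
# hence non-negative.
n = 4
for bits in product([0, 1], repeat=n):
    assert sum(2 ** i * b for i, b in enumerate(bits)) != -1
print(f"verified: no Boolean solution for n = {n}")
```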
Algebraic vs. semi-algebraic proofs: Admitting short refutations of the binary value principle is necessary for any algebraic proof system to fully simulate any known semi-algebraic proof system, and for strong enough algebraic proof systems it is also sufficient. In particular, we introduce a very strong proof system, named the Cone Proof System (CPS), that simulates all known semi-algebraic proof systems (and most other known concrete propositional proof systems), as a semi-algebraic analogue of the ideal proof system: CPS establishes the unsatisfiability of collections of polynomial equalities and inequalities over the reals by representing sum-of-squares proofs (and extensions) as algebraic circuits. We prove that IPS is polynomially equivalent to CPS iff IPS admits polynomial-size refutations of the binary value principle (for the language of systems of equations that have no 0/1-solutions), over both ℤ and ℚ.
We present a contact planner for complex legged locomotion tasks: standing up, climbing stairs using a handrail, crossing rubble, and getting out of a car. The need for such a planner was shown at the DARPA Robotics Challenge, where such behaviors could not be demonstrated (except for egress). Current planners suffer from prohibitive algorithmic complexity because they deploy a tree of robot configurations projected in contact with the environment. We tackle this issue by introducing a reduction property: the reachability condition. This condition defines a geometric approximation of the contact manifold, which is of low dimension, presents a Cartesian topology, and can be efficiently sampled and explored. The hard contact planning problem can then be decomposed into two subproblems: first, we plan a path for the root without considering the whole-body configuration, using a sampling-based algorithm; then, we generate a discrete sequence of whole-body configurations in static equilibrium along this path, using a deterministic contact-selection algorithm. The reduction avoids the algorithmic complexity encountered in previous works, resulting in the first interactive implementation of a contact planner (open source). While no contact planner has yet been proposed with theoretical completeness, we empirically demonstrate the merits of our framework: in a few seconds, with high success rates, we generate complex contact plans for various scenarios and two robots: HRP-2 and HyQ. These plans are validated in dynamic simulations or on the real HRP-2 robot.
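The two-stage decomposition can be sketched in a toy 2D setting as follows (all names and geometry below are invented for illustration; the actual planner operates on whole-body configurations of HRP-2 and HyQ):

```python
import numpy as np

# Toy 2D stand-in for the two-stage decomposition. Stage 1 plans a root
# path under a coarse "reachability" test; stage 2 deterministically
# selects a contact for each waypoint along that path.
GROUND_Y = 0.0
REACH = 1.0      # max distance from the root to the contact surface

def reachable(root):
    # Coarse geometric approximation of the contact manifold: the root
    # must be close enough to the ground to touch it, but not inside it.
    return 0.2 < root[1] - GROUND_Y < REACH

def plan_root_path(start, goal, steps=20):
    # Stage 1: a straight-line interpolation stands in for a
    # sampling-based planner (e.g., RRT); keep only reachable roots.
    path = [start + t * (goal - start) for t in np.linspace(0, 1, steps)]
    return [p for p in path if reachable(p)]

def select_contact(root):
    # Stage 2: deterministic contact selection -- project the root onto
    # the nearest point of the contact surface.
    return np.array([root[0], GROUND_Y])

start, goal = np.array([0.0, 0.5]), np.array([5.0, 0.5])
path = plan_root_path(start, goal)
contacts = [select_contact(p) for p in path]
print(f"{len(path)} root waypoints, first contact at {contacts[0]}")
```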
In recent years, a clear trend toward simplification has emerged in the development of robotic hands. Soft robotic approaches have been a useful tool in this direction, enabling complexity reduction by embodying part of the grasping intelligence in the hand's mechanical structure. Several hand prototypes designed according to such principles have achieved good results in terms of grasping simplicity, robustness, and reliability. Among them, the Pisa/IIT SoftHand demonstrated the feasibility of a large variety of grasping tasks by means of only one actuator and a suitably designed tendon-driven differential mechanism. However, the use of a single degree of actuation prevents the execution of more complex tasks, such as fine preshaping of the fingers and in-hand manipulation. While possible in theory, simply doubling the Pisa/IIT SoftHand actuation system has several disadvantages, e.g., in terms of space and mechanical complexity. To overcome these limitations, we propose a novel design framework for tendon-driven mechanisms in which the main idea is to turn transmission friction from a disturbance into a design tool. In this way, the degrees of actuation (DoAs) can be doubled with little additional complexity. Leveraging this idea, we design a novel robotic hand, the Pisa/IIT SoftHand 2. We present here its design, modeling, control, and experimental validation. The hand demonstrates that, by suitably combining only two DoAs with hand softness, a large variety of grasping and manipulation tasks can be performed, relying only on the intelligence embodied in the mechanism. Examples include rotating objects with different shapes, opening a jar, and pouring coffee from a glass.
Complexity is a term applied throughout the project management field, and project complexity typically presents additional management challenges to achieving project objectives. Without an appropriate approach to assess and manage project complexity, project teams frequently face difficulties in executing their projects. This study provides a framework for developing a tool that can effectively measure and assess the complexity level of a project. The tool was designed around a "Complexity Measurement Matrix" and comprises 37 complexity indicators (CIs) that have proven significant in describing project complexity. The complexity measurement scales were developed by normalizing data gathered from a survey of 44 completed projects, and the weight factors of the 37 CIs were calculated over three rounds of the Delphi method. The tool generates a set of comprehensive reports, providing users with the overall project complexity level, a series of radar diagrams describing the most important indicators, and associated strategies to manage each indicator. This study contributes to the body of knowledge by providing a pioneering approach to assessing project complexity, and it aids practitioners in project complexity management by identifying the most important complexity contributors and focusing attention on complexity-associated challenges.
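The measurement step reduces to a weighted aggregation of normalized indicator levels; a minimal sketch (with invented indicator names, weights, and levels — the actual tool uses 37 Delphi-weighted CIs with scales normalized from survey data) is:

```python
# Illustrative weighted-sum scoring in the spirit of the "Complexity
# Measurement Matrix" (all names and numbers below are hypothetical).
indicators = {                 # normalized indicator levels in [0, 1]
    "stakeholder count": 0.8,
    "schedule pressure": 0.6,
    "technology novelty": 0.3,
}
weights = {                    # Delphi-style weights, summing to 1
    "stakeholder count": 0.5,
    "schedule pressure": 0.3,
    "technology novelty": 0.2,
}
score = sum(weights[k] * indicators[k] for k in indicators)
top = max(indicators, key=lambda k: weights[k] * indicators[k])
print(f"overall complexity level: {score:.2f}; top contributor: {top}")
```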
We consider the ε-Consensus-Halving problem, in which a set of heterogeneous agents aim at dividing a continuous resource into two (not necessarily contiguous) portions that all of them simultaneously consider to be of approximately the same value (up to ε). This problem was recently shown to be PPA-complete, for n agents and n cuts, even for very simple valuation functions. In a quest to understand the root of the complexity of the problem, we consider the setting where there is only a constant number of agents, and we study both the computational complexity and the query complexity of the problem.
For agents with monotone valuation functions, we show a dichotomy: for two agents the problem is polynomial-time solvable, whereas for three or more agents it becomes PPA-complete. Similarly, we show that for two monotone agents the problem can be solved with polynomially many queries, whereas for three or more agents, we provide exponential query complexity lower bounds. These results are enabled via an interesting connection to a monotone Borsuk-Ulam problem, which may be of independent interest. For agents with general valuations, we show that the problem is PPA-complete and admits exponential query complexity lower bounds, even for two agents.
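For intuition about the problem itself (not the hardness results above), a single agent with one cut admits a simple bisection solution whenever the value of the "+" portion grows continuously with the cut position:

```python
# Toy single-agent instance of ε-consensus-halving: with one cut t,
# label [0, t] as "+" and [t, 1] as "-", and bisect until the agent
# values the two portions equally up to eps. (Illustrative only; the
# results above concern several agents and n cuts.)
def v(a, b):                 # agent's value for the interval [a, b]
    return b ** 2 - a ** 2   # density f(x) = 2x, total value 1

eps, lo, hi = 1e-6, 0.0, 1.0
while hi - lo > eps:
    t = (lo + hi) / 2
    if v(0, t) < v(t, 1):
        lo = t
    else:
        hi = t
print(f"cut at t ≈ {t:.4f}")   # sqrt(1/2) ≈ 0.7071
```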
The emerging orthogonal time frequency space (OTFS) modulation technique has shown its superiority over the current orthogonal frequency division multiplexing (OFDM) scheme in terms of its capabilities of exploiting full time-frequency diversity and coping with channel dynamics. Optimal maximum a posteriori (MAP) detection is capable of eliminating the negative impact of inter-symbol interference in the delay-Doppler (DD) domain, at the expense of a prohibitively high complexity. To reduce the receiver complexity for the OTFS scheme, this paper proposes a variational Bayes (VB) approach as an approximation of optimal MAP detection. In contrast to the widely used message passing algorithm, we prove that the proposed iterative algorithm is guaranteed to converge to the global optimum of the approximated MAP detector, regardless of whether the resulting factor graph is loopy. Simulation results validate the fast convergence of the proposed VB receiver and show its promising performance gain over the conventional message passing algorithm.
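As a generic stand-in for this style of detector (a simplified mean-field VB sketch for a toy linear model y = Hx + n with BPSK symbols, not the paper's delay-Doppler OTFS receiver), each coordinate-ascent update below is the posterior-mean update that mean-field VB yields for binary symbols under Gaussian noise:

```python
import numpy as np

# Mean-field VB detection for y = H x + n, x_i in {+1, -1}.
rng = np.random.default_rng(1)
n, sigma2 = 16, 0.1
H = rng.normal(size=(n, n)) / np.sqrt(n)
x_true = rng.choice([-1.0, 1.0], size=n)
y = H @ x_true + rng.normal(scale=np.sqrt(sigma2), size=n)

m = np.zeros(n)                      # posterior means E_q[x_i]
for _ in range(50):                  # coordinate-ascent VB iterations
    for i in range(n):
        r = y - H @ m + H[:, i] * m[i]        # residual excluding x_i
        m[i] = np.tanh(H[:, i] @ r / sigma2)  # q(x_i) update for BPSK

print("symbol errors:", int(np.sum(np.sign(m) != x_true)))
```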