In recent years, a clear trend toward simplification has emerged in the development of robotic hands. Soft robotic approaches have been a useful tool in this respect, enabling complexity reduction by embodying part of the grasping intelligence in the hand's mechanical structure. Several hand prototypes designed according to such principles have achieved good results in terms of grasping simplicity, robustness, and reliability. Among them, the Pisa/IIT SoftHand demonstrated the feasibility of a large variety of grasping tasks by means of only one actuator and a suitably designed tendon-driven differential mechanism. However, the use of a single degree of actuation prevents the execution of more complex tasks, such as fine preshaping of the fingers and in-hand manipulation. While possible in theory, simply doubling the Pisa/IIT SoftHand actuation system has several disadvantages, e.g., in terms of space and mechanical complexity. To overcome these limitations, we propose a novel design framework for tendon-driven mechanisms whose main idea is to turn transmission friction from a disturbance into a design tool. In this way, the degrees of actuation (DoAs) can be doubled with little additional complexity. Leveraging this idea, we design a novel robotic hand, the Pisa/IIT SoftHand 2. We present its design, modeling, control, and experimental validation. The hand demonstrates that, by suitably combining only two DoAs with hand softness, a large variety of grasping and manipulation tasks can be performed, relying only on the intelligence embodied in the mechanism. Examples include rotating objects with different shapes, opening a jar, and pouring coffee from a glass.
Complexity is a term applied throughout the project management field, and project complexity typically presents additional management challenges to achieving project objectives. Without an appropriate approach to assessing and managing project complexity, project teams frequently face difficulties in executing their projects. This study provides a framework for developing a tool that can effectively measure and assess the complexity level of a project. The tool was designed around a "Complexity Measurement Matrix" and comprises 37 complexity indicators (CIs) that have proven significant in describing project complexity. The complexity measurement scales were developed by normalizing data gathered from a survey of 44 completed projects, and the weight factors of the 37 CIs were calculated over three rounds of the Delphi method. The developed tool generates a set of comprehensive reports and provides users with the overall project complexity level, a series of radar diagrams describing the most important indicators, and associated strategies to manage each indicator. This study contributes to the body of knowledge by providing a pioneering approach to assessing project complexity, and it aids practitioners in facilitating project complexity management by identifying the most important complexity contributors and focusing on managing complexity-associated challenges.
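To make the scoring mechanism concrete, the sketch below shows one way such a weighted measurement matrix could be evaluated: each indicator is rated on a normalized scale, multiplied by its Delphi-derived weight, and the weighted scores are aggregated into an overall complexity level. The indicator names, weights, and level thresholds are illustrative placeholders, not the paper's actual 37 CIs or their calibrated values.

```python
# Illustrative sketch of a weighted "Complexity Measurement Matrix" evaluation.
# Indicator names, weights, and thresholds are hypothetical placeholders.

# Delphi-derived weights (would normally cover all 37 CIs and sum to 1).
WEIGHTS = {
    "stakeholder_count": 0.40,
    "schedule_pressure": 0.35,
    "technology_novelty": 0.25,
}

# Thresholds mapping the aggregate score to a qualitative complexity level.
LEVELS = [(0.33, "low"), (0.66, "medium"), (1.01, "high")]


def project_complexity(ratings: dict) -> tuple:
    """Aggregate normalized CI ratings (0..1) into an overall complexity level."""
    score = sum(WEIGHTS[ci] * ratings[ci] for ci in WEIGHTS)
    for upper, label in LEVELS:
        if score < upper:
            return score, label
    return score, "high"


if __name__ == "__main__":
    sample = {"stakeholder_count": 0.8, "schedule_pressure": 0.6, "technology_novelty": 0.3}
    print(project_complexity(sample))  # (0.605, 'medium') for these placeholder values
```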
We consider the ε-Consensus-Halving problem, in which a set of heterogeneous agents aim to divide a continuous resource into two (not necessarily contiguous) portions that all of them simultaneously consider to be of approximately the same value (up to ε). This problem was recently shown to be PPA-complete, for n agents and n cuts, even for very simple valuation functions. In a quest to understand the root of the complexity of the problem, we consider the setting where there is only a constant number of agents, and we consider both the computational complexity and the query complexity of the problem.
For agents with monotone valuation functions, we show a dichotomy: for two agents the problem is polynomial-time solvable, whereas for three or more agents it becomes PPA-complete. Similarly, we show that for two monotone agents the problem can be solved with polynomially-many queries, whereas for three or more agents, we provide exponential query complexity lower bounds. These results are enabled via an interesting connection to a monotone Borsuk-Ulam problem, which may be of independent interest. For agents with general valuations, we show that the problem is PPA-complete and admits exponential query complexity lower bounds, even for two agents.
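The abstract's query model can be made concrete with a small sketch: given query access to each agent's valuation of subintervals of the resource (taken here to be [0, 1]), the routine below checks whether a candidate division into a "+" portion and a "−" portion is an ε-consensus-halving for every agent. The interval representation, labeling convention, and example valuations are illustrative assumptions; the paper's algorithms and lower bounds are not reproduced here.

```python
# Illustrative check of the ε-consensus-halving condition over the resource [0, 1].
# Valuations are supplied as functions val(a, b) -> value of the subinterval [a, b];
# a division is a list of cut points, with pieces labeled +, -, +, ... from the left.
from typing import Callable, List


def is_eps_consensus_halving(
    valuations: List[Callable[[float, float], float]],
    cuts: List[float],
    eps: float,
) -> bool:
    """Return True iff every agent values the '+' and '-' portions within eps."""
    points = [0.0] + sorted(cuts) + [1.0]
    for val in valuations:
        plus = sum(val(points[i], points[i + 1]) for i in range(0, len(points) - 1, 2))
        minus = sum(val(points[i], points[i + 1]) for i in range(1, len(points) - 1, 2))
        if abs(plus - minus) > eps:
            return False
    return True


if __name__ == "__main__":
    # Two agents: one uniform, one with all value on [0, 0.5]; a single cut at 0.5
    # halves the first agent's value exactly but gives the second agent nothing on one side.
    uniform = lambda a, b: b - a
    left_heavy = lambda a, b: max(0.0, min(b, 0.5) - min(a, 0.5)) * 2
    print(is_eps_consensus_halving([uniform, left_heavy], [0.5], 0.05))  # False
```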
The emerging orthogonal time frequency space (OTFS) modulation technique has shown its superiority to the current orthogonal frequency division multiplexing (OFDM) scheme in terms of its capabilities of exploiting full time-frequency diversity and coping with channel dynamics. Optimal maximum a posteriori (MAP) detection is capable of eliminating the negative impact of inter-symbol interference in the delay-Doppler (DD) domain, but at the expense of a prohibitively high complexity. To reduce the receiver complexity of the OTFS scheme, this paper proposes a variational Bayes (VB) approach as an approximation of the optimal MAP detection. Compared to the widely used message passing algorithm, we prove that the proposed iterative algorithm is guaranteed to converge to the global optimum of the approximated MAP detector regardless of whether the resulting factor graph is loopy or not. Simulation results validate the fast convergence of the proposed VB receiver and also show its promising performance gain compared to the conventional message passing algorithm.
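As a rough illustration of a VB-type detector, the sketch below runs a mean-field variational update for a generic linear observation model y = Hx + n with a QPSK alphabet: each symbol's posterior q_i(x_i) is refreshed in turn using the current posterior means of the other symbols. This is a generic stand-in under assumed channel and constellation choices; it is not the paper's delay-Doppler OTFS receiver or its convergence analysis.

```python
# Minimal mean-field variational-Bayes symbol detector for y = H x + n.
# Channel model, constellation, and sizes are assumptions for illustration only.
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)


def vb_detect(y, H, noise_var, n_iter=20):
    """Iteratively update per-symbol posteriors q_i(x_i) under a mean-field factorization."""
    n_sym = H.shape[1]
    probs = np.full((n_sym, len(QPSK)), 1 / len(QPSK))   # q_i over the constellation
    means = probs @ QPSK                                  # posterior means E_q[x_i]
    col_energy = np.sum(np.abs(H) ** 2, axis=0)
    for _ in range(n_iter):
        for i in range(n_sym):
            residual = y - H @ means + H[:, i] * means[i]      # cancel all symbols but x_i
            metric = (2 * np.real(np.conj(QPSK) * (H[:, i].conj() @ residual))
                      - np.abs(QPSK) ** 2 * col_energy[i]) / noise_var
            metric -= metric.max()                             # numerical stabilization
            probs[i] = np.exp(metric) / np.exp(metric).sum()
            means[i] = probs[i] @ QPSK
    return QPSK[np.argmax(probs, axis=1)], probs


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = (rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))) / np.sqrt(2)
    x = QPSK[rng.integers(0, 4, size=4)]
    y = H @ x + 0.05 * (rng.normal(size=8) + 1j * rng.normal(size=8))
    x_hat, _ = vb_detect(y, H, noise_var=0.005)
    print(np.allclose(x_hat, x))  # typically recovers the transmitted symbols at this SNR
```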
Variation in communicative complexity has been conceptually and empirically attributed to social complexity, with animals living in more complex social environments exhibiting more signals and/or more complex signals than animals living in simpler social environments. As compelling as studies highlighting a link between social and communicative variables are, this hypothesis remains challenged by operational problems, contrasting results, and several weaknesses of the associated tests. Specifically, how to best operationalize social and communicative complexity remains debated; alternative hypotheses, such as the role of a species’ ecology, morphology, or phylogenetic history, have been neglected; and the actual ways in which variation in signaling is directly affected by social factors remain largely unexplored. In this review, we address these three issues and propose an extension of the “social complexity hypothesis for communicative complexity” that resolves and acknowledges the above factors. We specifically argue for integrating the inherently multimodal nature of communication into a more comprehensive framework and for acknowledging the social context of derived signals and the potential of audience effects. By doing so, we believe it will be possible to generate more accurate predictions about which specific social parameters may be responsible for selection on new or more complex signals, as well as to uncover potential adaptive functions that are not necessarily apparent from studying communication in only one modality.
On solving a convex-concave bilinear saddle-point problem (SPP), there have been many works studying the complexity results of first-order methods. These results are all about upper complexity bounds, which can determine at most how many iterations would guarantee a solution of desired accuracy. In this paper, we pursue the opposite direction by deriving lower complexity bounds of first-order methods on large-scale SPPs. Our results apply to methods whose iterates are in the linear span of past first-order information, as well as to more general methods that produce their iterates in an arbitrary manner based on first-order information. We first work on affinely constrained smooth convex optimization, which is a special case of SPP. Different from gradient methods on unconstrained problems, we show that first-order methods on affinely constrained problems generally cannot be accelerated from the known convergence rate O(1/t) to O(1/t²), and in addition, O(1/t) is optimal for convex problems. Moreover, we prove that for strongly convex problems, O(1/t²) is the best possible convergence rate, while it is known that gradient methods can achieve linear convergence on unconstrained problems. Then we extend these results to general SPPs. It turns out that our lower complexity bounds match several established upper complexity bounds in the literature, and thus they are tight and indicate the optimality of several existing first-order methods.
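For reference, a standard way to write the problems this abstract refers to is given below; the symbols f, g, A, b, X, Y are generic notation rather than the paper's exact formulation. The affinely constrained problem arises as the special case in which the inner maximization enforces the constraint through a Lagrange multiplier.

```latex
% Generic bilinear saddle-point problem (notation assumed, not necessarily the paper's exact statement):
\min_{x \in X} \max_{y \in Y} \; f(x) + \langle A x, y \rangle - g(y)

% Affinely constrained smooth convex optimization as a special case (take g = \langle b, \cdot \rangle):
\min_{x} f(x) \ \text{s.t.}\ A x = b
\quad\Longleftrightarrow\quad
\min_{x} \max_{y} \; f(x) + \langle A x - b, y \rangle

% Rates stated in the abstract for first-order methods on the constrained problem:
%   convex f:           O(1/t) is optimal (no acceleration to O(1/t^2) in general);
%   strongly convex f:  O(1/t^2) is the best possible (vs. linear convergence when unconstrained).
```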
The 2-adic complexity has been well analyzed in the periodic case. However, we are not aware of any theoretical results in the aperiodic case. In particular, the Nth 2-adic complexity has not been studied for any promising candidate of a pseudorandom sequence of finite length N. Also, nothing seems to be known for a part of the period of length N of any cryptographically interesting periodic sequence. Here we introduce the first method for this aperiodic case. More precisely, we study the relation between the Nth maximum-order complexity and the Nth 2-adic complexity of binary sequences and prove a lower bound on the Nth 2-adic complexity in terms of the Nth maximum-order complexity. Then any known lower bound on the Nth maximum-order complexity implies a lower bound on the Nth 2-adic complexity of the same order of magnitude. In the periodic case, one can prove a slightly better result. The latter bound is sharp, which is illustrated by the maximum-order complexity of ℓ-sequences. The idea of the proof helps us to characterize the maximum-order complexity of periodic sequences in terms of the unique rational number defined by the sequence. We also show that a periodic sequence of maximal maximum-order complexity must also be of maximal 2-adic complexity.
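As a concrete reference point, the Nth maximum-order complexity of a finite binary sequence can be computed by brute force as the smallest memory length m for which every length-m subword is always followed by the same symbol (equivalently, the length of the shortest feedback shift register with an arbitrary feedback function that generates the sequence). The sketch below follows that definition directly; it is only an illustration of the quantity involved, not the bounding technique of the paper.

```python
# Brute-force Nth maximum-order complexity of a finite binary sequence:
# the smallest m such that every length-m subword determines its successor uniquely,
# i.e. some feedback shift register of memory m reproduces the sequence.

def max_order_complexity(seq):
    n = len(seq)
    for m in range(1, n):
        successor = {}
        consistent = True
        for i in range(n - m):
            window = tuple(seq[i:i + m])
            nxt = seq[i + m]
            if successor.setdefault(window, nxt) != nxt:
                consistent = False
                break
        if consistent:
            return m
    return n  # degenerate fallback for very short sequences


if __name__ == "__main__":
    print(max_order_complexity([0, 1, 0, 1, 1, 0, 1, 0, 0]))  # prints 4 for this example
```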
Versatile Video Coding (VVC), as the latest standard, significantly improves coding efficiency over its predecessor, High Efficiency Video Coding (HEVC), but at the expense of sharply increased complexity. In VVC, the quad-tree plus multi-type tree (QTMT) structure of the coding unit (CU) partition accounts for over 97% of the encoding time, due to the brute-force search for recursive rate-distortion (RD) optimization. Instead of the brute-force QTMT search, this paper proposes a deep learning approach to predict the QTMT-based CU partition, drastically accelerating the encoding process of intra-mode VVC. First, we establish a large-scale database containing sufficient CU partition patterns with diverse video content, which can facilitate data-driven VVC complexity reduction. Next, we propose a multi-stage exit CNN (MSE-CNN) model with an early-exit mechanism to determine the CU partition, in accordance with the flexible QTMT structure at multiple stages. Then, we design an adaptive loss function for training the MSE-CNN model, accounting for both the uncertain number of split modes and the target of minimizing the RD cost. Finally, a multi-threshold decision scheme is developed to achieve a desirable trade-off between complexity and RD performance. The experimental results demonstrate that our approach reduces the encoding time of VVC by 44.65%–66.88% with a negligible Bjøntegaard delta bit-rate (BD-BR) increase of 1.322%–3.188%, significantly outperforming other state-of-the-art approaches.
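To illustrate the early-exit idea in general terms, the sketch below shows a multi-stage CNN in which each stage produces a split-mode distribution and inference stops once the confidence at that stage passes a threshold. The layer sizes, stage count, split-mode count, and thresholds are placeholders; this does not reproduce the paper's MSE-CNN architecture, its adaptive loss, or its multi-threshold decision scheme.

```python
# Illustrative early-exit, multi-stage CNN for CU split-mode prediction.
# Architecture sizes, stage count, and thresholds are placeholders, not the MSE-CNN.
import torch
import torch.nn as nn


class EarlyExitPartitionNet(nn.Module):
    def __init__(self, num_stages=3, num_split_modes=6, channels=32):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1 if s == 0 else channels, channels, 3, padding=1),
                nn.ReLU(),
            )
            for s in range(num_stages)
        ])
        # One classification head ("exit") per stage, predicting a split mode.
        self.heads = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, num_split_modes))
            for _ in range(num_stages)
        ])

    def forward(self, x, exit_threshold=0.9):
        feats = x
        for stage, head in zip(self.stages, self.heads):
            feats = stage(feats)
            probs = torch.softmax(head(feats), dim=1)
            if probs.max().item() >= exit_threshold:   # confident enough: stop early
                return probs
        return probs                                    # otherwise use the last stage's prediction


if __name__ == "__main__":
    net = EarlyExitPartitionNet()
    cu_luma = torch.randn(1, 1, 64, 64)                 # a 64x64 luma CU (placeholder input)
    print(net(cu_luma, exit_threshold=0.5).shape)       # torch.Size([1, 6])
```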