Various randomized consensus algorithms have been proposed in the literature. In some cases randomness is due to the choice of a randomized network communication protocol; in other cases, randomness is simply caused by the potential unpredictability of the environment in which the distributed consensus algorithm is implemented. Conditions ensuring the convergence of these algorithms have already been proposed in the literature. Regarding the rate of convergence of such algorithms, two approaches can be taken: one is based on a mean square analysis, while the other is based on the concept of Lyapunov exponent. In this paper, using concentration results, we prove that the mean square analysis is the right approach when the number of agents is large. Unlike the existing literature, in this paper we do not restrict to average-preserving algorithms. Instead, we allow consensus to be reached at a point which may differ from the average of the initial states. The advantage of such algorithms is that they do not require bidirectional communication among agents, and thus they apply to more general contexts. Moreover, in many important contexts it is possible to prove that the displacement from the initial average tends to zero as the number of agents goes to infinity.
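As an illustration only (our sketch, not the paper's construction), the non-average-preserving setting can be mimicked with a simple asymmetric gossip simulation: at each step a randomly chosen agent moves toward another agent, which itself stays put. The agents reach consensus at a random point, and the displacement of that point from the initial average shrinks as the number of agents grows:

```python
import numpy as np

def asymmetric_gossip(x, steps, rng, gamma=0.5):
    """Unidirectional gossip: at each step a random agent i moves a
    fraction gamma toward a random agent j, while j is unchanged.
    The update is not average preserving."""
    x = x.copy()
    n = len(x)
    for _ in range(steps):
        i, j = rng.integers(n), rng.integers(n)
        x[i] += gamma * (x[j] - x[i])
    return x

rng = np.random.default_rng(0)
for n in (10, 100, 1000):
    x0 = rng.standard_normal(n)
    x = asymmetric_gossip(x0, steps=200 * n, rng=rng)
    spread = x.max() - x.min()          # distance from consensus
    shift = abs(x.mean() - x0.mean())   # displacement from initial average
    print(n, spread, shift)
```

The printed `shift` column illustrates the last claim of the abstract qualitatively: the consensus value is a random convex combination of the initial states, and its deviation from the initial average decays (roughly like $1/\sqrt{n}$ in this toy model) as $n$ grows.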
In this paper we analyze a class of multiagent consensus dynamical systems inspired by Krause's original model. As in Krause's model, the basic assumption is so-called bounded confidence: two agents can influence each other only when the distance between their state values is below a given threshold $R$. We study the system from an Eulerian point of view, considering (possibly continuous) probability distributions of agents, and we present original convergence results. The limit distribution is always necessarily a convex combination of delta functions at least $R$ apart from each other; in other words, these models are locally aggregating. The Eulerian perspective provides the natural framework for designing a numerical algorithm, with which we obtain several simulations in $1$ and $2$ dimensions.
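A minimal Lagrangian (agent-based) sketch of the bounded-confidence dynamics in one dimension — our illustration, not the paper's Eulerian scheme — shows the local aggregation described above: the population collapses into point clusters, and surviving clusters end up more than $R$ apart (any two clusters within distance $R$ would still interact and merge):

```python
import numpy as np

def krause_step(x, R):
    """Synchronous bounded-confidence update: each agent moves to the
    average of all agents (itself included) within distance R."""
    near = (np.abs(x[:, None] - x[None, :]) <= R).astype(float)
    return (near @ x) / near.sum(axis=1)

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, size=200)   # 200 agents, opinions in [0, 10]
R = 1.0
for _ in range(200):
    x_next = krause_step(x, R)
    if np.allclose(x_next, x, atol=1e-12, rtol=0.0):
        break                          # fixed point reached
    x = x_next

# Group the final opinions into clusters and inspect their separation.
xs = np.sort(x)
splits = np.where(np.diff(xs) > R / 2)[0] + 1
clusters = [c.mean() for c in np.split(xs, splits)]
print(clusters)
```

At a fixed point, consecutive cluster locations are necessarily more than $R$ apart, matching the structure of the limit distribution stated in the abstract.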
Recent changes in the management of hip fracture surgery patients may have modified the epidemiology of postoperative complications.
We performed an observational study of a cohort of patients undergoing hip fracture surgery to update the epidemiological data on this population. The primary study outcome was the incidence of confirmed symptomatic venous thromboembolism (VTE), defined as deep vein thrombosis, pulmonary embolism (PE), or both, at 3 months. Overall mortality at 1, 3 and 6 months was also evaluated.
Consecutive patients aged at least 18 years, hospitalized in French public or private hospitals (531 centers) and undergoing hip fracture surgery, were recruited prospectively during a 2-month period in 2002, with a follow-up at 6 months. Predictive factors for VTE at 3 months and for death at 6 months were also analyzed.
Data from 6860 (97.3%) of the 7019 recruited patients were included in the analysis. The median age was 82 years. Low molecular weight heparins were administered perioperatively in 97.6% of patients; 69.5% received this treatment for at least 4 weeks. The actuarial rate of confirmed symptomatic VTE at 3 months was 1.34% (85 events, 95% CI: 1.04-1.64). There were 16 PEs (actuarial rate: 0.25%), three of which were fatal. Overall, 1006 (14.7%) patients were dead at 6 months. Cardiovascular disease was the most frequent cause of death (270 patients; 26.8%).
The current rate of postoperative VTE is low, but overall mortality remains high. Indeed, hip fracture patients belong to a vulnerable group of elderly people with comorbid diseases and a high risk of postoperative morbidity and mortality. An interdisciplinary approach could be the key to improving short- and long-term outcomes.
It is well known that a linear system controlled by a quantized feedback may exhibit wild dynamic behavior, typical of a nonlinear system. In the classical literature devoted to control with quantized feedback, the flow of information in the feedback loop was not considered a critical parameter. Consequently, it was natural in the control synthesis simply to choose the quantized feedback approximating the one provided by classical methods, and to model the quantization error as additive white noise. On the other hand, if the flow of information has to be limited, for instance because of the use of a transmission channel with limited capacity, some specific considerations are in order. The aim of this paper is to obtain a detailed analysis of linear scalar systems with a stabilizing quantized feedback control. First, a general framework based on a Lyapunov-like approach, encompassing known stabilization techniques, is proposed. In this framework, a rather complete analysis can be obtained through a neat geometric characterization of asymptotically stable closed-loop maps. In particular, a general tradeoff relation between the number of quantization intervals, which quantifies the information flow, and the convergence time is established. Then, an alternative stabilization method, based on the chaotic behavior of piecewise affine maps, is proposed. Finally, the performance of all these methods is compared.
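As a toy illustration of the tradeoff between the number of quantization intervals and the achievable contraction (a sketch under our own simplifying assumptions, not the paper's construction), consider the scalar plant $x^+ = a x + u$ with a memoryless uniform quantizer $q$ having $N$ cells on $[-1,1]$ and feedback $u = -a\,q(x)$. The closed loop is $x^+ = a\,(x - q(x))$, so the quantization error bound $|x - q(x)| \le 1/N$ traps the state in a residual interval of radius $a/N$, which shrinks as $N$ grows:

```python
def uniform_quantizer(x, N, X=1.0):
    """Midpoint uniform quantizer with N cells covering [-X, X]."""
    width = 2.0 * X / N
    k = min(max(int((x + X) // width), 0), N - 1)  # cell index of x
    return -X + (k + 0.5) * width                  # cell midpoint

a, N = 2.0, 8        # unstable pole a > 1; N quantization cells
x = 0.9              # initial state inside [-1, 1]
for t in range(20):
    u = -a * uniform_quantizer(x, N)
    x = a * x + u    # closed loop: x+ = a * (x - q(x))
print(abs(x))        # trapped in the residual interval |x| <= a/N = 0.25
```

This only achieves practical stability (convergence to a small set, not to the origin); obtaining asymptotic stability, and quantifying the convergence time against the number of intervals, is exactly where the finer analysis of the paper comes in.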
Quantized feedback control has been receiving much attention in the control community in the past few years. Quantization is indeed a natural way to take into account, in the control design, the complexity constraints of the controller as well as the communication constraints in the information exchange between the controller and the plant. In this paper, we analyze the stabilization problem for discrete-time linear systems with multidimensional state and one-dimensional input using quantized feedbacks with a memory structure, focusing on the tradeoff between complexity and performance. A quantized controller with memory is a dynamical system with a state space, a state updating map and an output map. The complexity of the quantized controller is modeled by means of three indexes. The first index, L, coincides with the number of controller states. The second index is the number M of possible values that the state updating map of the controller can take at each time. The third index is the number N of possible values that the output map of the controller can take at each time; N thus also corresponds to the number of control values that the controller can choose at each time. The performance index is chosen to be the time T needed to shrink the state of the plant from a starting set to a target set. Finally, the contraction rate C, namely the ratio between the volumes of the starting and target sets, is introduced. We evaluate the relations between these parameters for various quantized stabilizers, with and without memory, and we make some comparisons. Then, we prove a number of results showing the intrinsic limitations of quantized control. In particular, we show that, in order to obtain a control strategy yielding arbitrarily small values of T/ln C (a requirement which can be interpreted as a weak form of the pole assignability property), LN/ln C must be sufficiently large.
The capacity of finite Abelian group codes over symmetric memoryless channels is determined. For certain important examples, such as $m$-PSK constellations over additive white Gaussian noise (AWGN) channels, with $m$ a prime power, it is shown that this capacity coincides with the Shannon capacity; i.e., there is no loss in capacity using group codes. (This had previously been known for binary-linear codes used over binary-input output-symmetric memoryless channels.) On the other hand, a counterexample involving a three-dimensional geometrically uniform constellation is presented in which the use of Abelian group codes leads to a loss in capacity. The error exponent of the average group code is determined, and it is shown to be bounded away from the random-coding error exponent, at low rates, for finite Abelian groups not admitting a Galois field structure.
Background: The impact of Clostridium difficile infection (CDI) on healthcare costs is significant due to the extra costs of associated inpatient care. However, the specific contribution of recurrences has rarely been studied. Aim: The aim of this study was to estimate the hospital costs of CDI and the fraction attributable to recurrences in French acute-care hospitals. Methods: A retrospective study was performed for 2011 on a sample of 12 large acute-care hospitals. CDI costs were estimated from both hospital and public insurance perspectives. For each stay, the additional costs of CDI were estimated by comparison with controls without CDI, extracted from the national DRG (diagnosis-related group) database and matched on DRG, age and sex. When CDI was the primary diagnosis, the full cost of the stay was used. Findings: A total of 1067 bacteriological cases of CDI were identified, corresponding to 979 stays involving 906 different patients. Recurrences were identified in 118 (12%) of these stays, with 51.7% of them having occurred within the same stay as the index episode. The mean length of stay was 63.8 days for stays with recurrence, compared to 25.1 days for stays with an index case only. The mean extra cost per stay with CDI was estimated at €9,575 (median: €7,514). The extra cost of CDI in public acute-care hospitals was extrapolated to €163.1 million at the national level, of which 12.5% was attributable to recurrences. Conclusion: The economic burden of CDI is substantial and directly impacts healthcare systems in France.
There is unmet need among patients suffering from chronic pain, yet innovation may be impeded by the difficulty of justifying economic value in a field beset by data limitations and methodological variability. A systematic review was conducted to identify and summarise the key areas of variability and the limitations of modelling approaches in the economic evaluation of treatments for chronic pain. The results of the literature review were then used to support the development of a fully flexible open-source economic model structure, designed to test structural and data assumptions and to act as a reference for future modelling practice. The key model design themes identified from the systematic review included: time horizon; titration and stabilisation; number of treatment lines; choice/ordering of treatment; and the impact of parameter uncertainty (given the reliance on expert opinion). Exploratory analyses using the model to compare a hypothetical novel therapy versus morphine as first-line treatments showed cost-effectiveness results to be sensitive to structural and data assumptions. Assumptions about the treatment pathway and the choice of time horizon were key model drivers. Our results suggest that structural model design and data assumptions may have driven previous cost-effectiveness results and, ultimately, decisions based on economic value. We therefore conclude that it is vital that future economic models in chronic pain be fully transparent, and we hope our open-source code proves useful in moving toward a common approach to modelling pain that includes robust sensitivity analyses to test structural and parameter uncertainty.
In this paper, the ensembles of repeat multiple-accumulate codes (RA$m$), obtained by interconnecting a repeater with a cascade of $m$ accumulate codes through uniform random interleavers, are analyzed. It is proved that the average spectral shapes of these code ensembles are equal to $0$ below a threshold distance $\epsilon_m$ and, moreover, that they form a nonincreasing sequence in $m$ converging uniformly to the maximum between the average spectral shape of the linear random ensemble and $0$. Consequently, the sequence $\epsilon_m$ converges to the Gilbert-Varshamov (GV) distance. A further analysis allows us to conclude that if $m \geq 2$ the RA$m$ ensembles are asymptotically good and that $\epsilon_m$ is the typical normalized minimum distance as the interleaver length goes to infinity. Combining the two results, it follows that the typical distance of the RA$m$ ensembles converges to the Gilbert-Varshamov bound.
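For concreteness, the Gilbert-Varshamov distance referenced above is, for a binary code of rate $R$, the root $\delta_{GV}(R)$ of $h(\delta) = 1 - R$ on $[0, 1/2]$, where $h$ is the binary entropy function. A short numerical sketch (our illustration, not from the paper) computes it by bisection:

```python
from math import log2

def h(p):
    """Binary entropy function, in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def gv_distance(R, tol=1e-10):
    """Solve h(delta) = 1 - R for delta in [0, 1/2] by bisection
    (h is increasing on this interval)."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h(mid) < 1 - R:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(gv_distance(0.5))  # ~0.110: GV distance of a rate-1/2 binary code
```

The abstract's result says the typical normalized minimum distance of the RA$m$ ensembles approaches this value as $m$ grows.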
In this paper, ensembles of parallel concatenated codes are studied and rigorous results on their asymptotic performance, under the assumption of maximum-likelihood (ML) decoding, are presented. In particular, it is proven that in any parallel concatenation scheme with $k$ branches, where all $k$ encoders are recursive and the Bhattacharyya parameter of the channel is sufficiently small, the bit-error rate (BER) and the word-error rate go to $0$ exactly like $N^{1-k}$ and $N^{2-k}$, respectively. Different types of ensembles, obtained by changing the subgroup of permutations used to interconnect the various encoders, are considered.