In this work, hierarchical federated learning (HFL) over wireless multi-cell networks is proposed for large-scale model training while preserving data privacy. However, the imbalanced data distribution has a significant impact on the convergence rate and learning accuracy. In addition, a large learning latency is incurred due to the traffic load imbalance among base stations (BSs) and limited wireless resources. To cope with these challenges, we first analyze the model error and learning latency in wireless HFL. Then, joint user association and wireless resource allocation algorithms are investigated under independent and identically distributed (IID) and non-IID training data, respectively. For the IID case, a learning-latency-aware strategy is designed to minimize the learning latency by optimizing user association and wireless resource allocation, where a mobile device selects the BS with the maximal uplink channel signal-to-noise ratio (SNR). For the non-IID case, the total data distribution distance and the learning latency are jointly minimized to obtain the optimal user association and resource allocation. The results show that both the data distribution and the uplink channel SNR should be taken into account for user association in the non-IID case. Finally, the effectiveness of the proposed algorithms is demonstrated by simulations.
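To make the IID-case association rule concrete, the sketch below pairs max-SNR user association with a rough upload-latency estimate under an equal per-BS bandwidth split. It is a minimal illustration, not the paper's algorithm: the network sizes, the Shannon-rate latency model, and names such as `model_bits` and `bandwidth_hz` are assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not the paper's parameters): 4 BSs, 20 devices,
# uplink SNRs in dB drawn at random for each device-BS pair.
num_bs, num_devices = 4, 20
snr_db = rng.uniform(0, 30, size=(num_devices, num_bs))

# IID case: each device associates with the BS offering the highest uplink SNR.
association = snr_db.argmax(axis=1)

# Rough per-device upload latency under an equal bandwidth split per BS
# (Shannon-rate model; model_bits and bandwidth_hz are assumptions).
model_bits = 1e6
bandwidth_hz = 1e7
latency = np.zeros(num_devices)
for b in range(num_bs):
    served = np.flatnonzero(association == b)
    if served.size == 0:
        continue
    bw_per_user = bandwidth_hz / served.size          # equal split among served devices
    rate = bw_per_user * np.log2(1 + 10 ** (snr_db[served, b] / 10))
    latency[served] = model_bits / rate               # seconds to upload one model

# The per-round latency is set by the slowest device (synchronous aggregation).
print(f"per-round upload latency: {latency.max():.3f} s")
```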
We explore the feasibility of implementing a small surface code with 9 data qubits and 8 ancilla qubits, commonly referred to as surface-17, using a linear chain of 171Yb+ ions. Two-qubit gates can be performed between any two ions in the chain, with the gate time increasing linearly with ion distance. Measurement of the ion state by fluorescence requires that the ancilla qubits be physically separated from the data qubits to avoid errors on the data due to scattered photons. We minimize the time required to measure one round of stabilizers by optimizing the mapping of the two-dimensional surface code to the linear chain of ions. We develop a physically motivated Pauli error model that allows for fast simulation and captures the key sources of noise in an ion trap quantum computer, including gate imperfections and ion heating. Our simulations show a consistent requirement of a two-qubit gate fidelity of ≥99.9% for the logical memory to have a better fidelity than physical two-qubit operations. Finally, we perform an analysis of the error subsets from the importance sampling method used to bound the logical error rates to gain insight into which error sources are particularly detrimental to error correction.
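As a rough illustration of why the code-to-chain mapping matters, the toy sketch below scores a candidate mapping by summing two-qubit gate times that grow linearly with ion separation. The chain layout, the gate-time constants in `gate_time_us`, and the short stabilizer list are hypothetical placeholders, not the optimized surface-17 layout or error model from the paper.

```python
# Toy estimate of one stabilizer-measurement round for a linear ion chain.
# The mapping, gate-time constants, and stabilizer list below are illustrative
# assumptions, not the optimized layout from the paper.

def gate_time_us(i, j, t0=20.0, c=5.0):
    """Two-qubit gate time grows linearly with ion separation |i - j|."""
    return t0 + c * abs(i - j)

# Hypothetical chain positions: data qubits d0..d8 interleaved with ancillas a0..a7.
position = {f"d{k}": 2 * k for k in range(9)}
position.update({f"a{k}": 2 * k + 1 for k in range(8)})

# A few illustrative stabilizers (ancilla, data qubits it touches).
stabilizers = [
    ("a0", ["d0", "d1", "d3", "d4"]),
    ("a1", ["d1", "d2", "d4", "d5"]),
    ("a2", ["d0", "d3"]),
]

# Serial-gate model: each ancilla interacts with its data qubits one after another.
round_time = 0.0
for anc, data in stabilizers:
    round_time += sum(gate_time_us(position[anc], position[d]) for d in data)

print(f"toy stabilizer-round time: {round_time:.1f} us (excludes measurement)")
```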
There has been a steady increase in the number of studies aiming to identify DNA methylation differences associated with complex phenotypes. Many of the challenges of epigenetic epidemiology regarding study design and interpretation have been discussed in detail; however, several analytical concerns remain outstanding and require further exploration. In this study we seek to address three analytical issues. First, we quantify the multiple testing burden and propose a standard statistical significance threshold for identifying DNA methylation sites that are associated with an outcome. Second, we establish whether linear regression, the statistical tool chosen for the majority of studies, is appropriate and whether it is biased by the underlying distribution of DNA methylation data. Finally, we assess the sample size required for adequately powered DNA methylation association studies.
We quantified DNA methylation in the Understanding Society cohort (n = 1175), a large population-based study, using the Illumina EPIC array to assess the statistical properties of DNA methylation association analyses. By simulating null DNA methylation studies, we generated the distribution of p-values expected by chance and calculated the 5% family-wise error rate for EPIC array studies to be 9 × 10⁻⁸. Next, we tested whether the assumptions of linear regression are violated by DNA methylation data and found that the majority of sites do not satisfy the assumption of normally distributed residuals. Nevertheless, we found no evidence that this bias influences analyses by increasing the likelihood that affected sites are false positives. Finally, we performed power calculations for EPIC-based DNA methylation studies, demonstrating that existing studies with data on ~1000 samples are adequately powered to detect small differences at the majority of sites.
We propose that a significance threshold of P < 9 × 10⁻⁸ adequately controls the false positive rate for EPIC array DNA methylation studies. Moreover, our results indicate that linear regression is a valid statistical method for DNA methylation studies, even though the data do not always satisfy its assumptions. These findings have implications for epidemiological studies of DNA methylation and provide a framework for the interpretation of findings from current and future studies.
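A downscaled sketch of the null-simulation idea behind the threshold is given below: simulate phenotypes with no true association, record the minimum p-value per study, and take its 5th percentile as the empirical family-wise threshold. The probe count, sample size, number of null studies, and the beta-distributed methylation values are illustrative assumptions; at full EPIC scale the study reports a threshold of about 9 × 10⁻⁸.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Downscaled illustration: 5,000 "probes" and 500 samples instead of the
# ~850k EPIC probes and n = 1175 used in the study.
n_probes, n_samples, n_null_studies = 5000, 500, 200

methylation = rng.beta(2, 5, size=(n_probes, n_samples))  # beta-like methylation values

min_p = np.empty(n_null_studies)
for s in range(n_null_studies):
    # Null study: the phenotype is pure noise, so any association is a false positive.
    phenotype = rng.normal(size=n_samples)
    # Per-probe simple linear regression == Pearson correlation test here.
    xc = methylation - methylation.mean(axis=1, keepdims=True)
    yc = phenotype - phenotype.mean()
    r = (xc @ yc) / np.sqrt((xc ** 2).sum(axis=1) * (yc ** 2).sum())
    t = r * np.sqrt((n_samples - 2) / (1 - r ** 2))
    p_values = 2 * stats.t.sf(np.abs(t), df=n_samples - 2)
    min_p[s] = p_values.min()

# The 5% family-wise threshold is the 5th percentile of the minimum p-value
# across null studies; the paper estimates ~9e-8 for the full EPIC array.
threshold = np.quantile(min_p, 0.05)
print(f"empirical 5% FWER threshold (toy scale): {threshold:.2e}")
```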
The present study aims at showing the major types of errors that students make in their written products and their perspectives on the causes of those errors. In this research, 180 students' paragraphs were analysed for the purpose of error analysis, and a questionnaire was distributed to the students to find out their perspectives on the causes of the errors they made. The errors found in the students' written products were first categorised into several major categories and then further classified into four general categories: morphological, lexical, syntactical, and mechanical. The findings show that there were 12 major errors that the students made in their writing. Seven of them fall under the morphological category, two under lexical, one under syntactical, and two under mechanical. The majority of students are aware that they make errors in their writing but find it hard to identify them. As a result, they may not be able to correct or avoid the errors. Thus, incorporating error analysis into teaching and learning is a necessary practice for helping students minimise the production of errors in their writing. Furthermore, a lack of understanding of grammatical functions and limited knowledge of vocabulary are considered the major causes of errors from the students' perspectives. Through the classification system provided by the error analysis procedure, teachers are able to address this issue by incorporating smaller groupings of errors into the curriculum and teaching program.
The successful implementation of algorithms on quantum processors relies on the accurate control of quantum bits (qubits) to perform logic gate operations. In this era of noisy intermediate-scale quantum (NISQ) computing, systematic miscalibrations, drift, and crosstalk in the control of qubits can lead to a coherent form of error that has no classical analog. Coherent errors severely limit the performance of quantum algorithms in an unpredictable manner, and mitigating their impact is necessary for realizing reliable quantum computations. Moreover, the average error rates measured by randomized benchmarking and related protocols are not sensitive to the full impact of coherent errors and therefore do not reliably predict the global performance of quantum algorithms, leaving us unprepared to validate the accuracy of future large-scale quantum computations. Randomized compiling is a protocol designed to overcome these performance limitations by converting coherent errors into stochastic noise, dramatically reducing unpredictable errors in quantum algorithms and enabling accurate predictions of algorithmic performance from error rates measured via cycle benchmarking. In this work, we demonstrate significant performance gains under randomized compiling for the four-qubit quantum Fourier transform algorithm and for random circuits of variable depth on a superconducting quantum processor. Additionally, we accurately predict algorithm performance using experimentally measured error rates. Our results demonstrate that randomized compiling can be utilized to leverage and predict the capabilities of modern-day noisy quantum processors, paving the way forward for scalable quantum computing.
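The sketch below illustrates the core mechanism of randomized compiling for a single hard cycle consisting of one CNOT: random Pauli gates are inserted before the cycle and their images under conjugation by the CNOT are applied afterwards, so every compiled instance implements the same ideal cycle while coherent errors are averaged into Pauli noise. This is a two-qubit toy, not the compiler used in the experiment; Pauli phases are dropped (they contribute only a global phase), and the function names are illustrative.

```python
import random

def pauli_mult(a, b):
    """Single-qubit Pauli product, ignoring the overall phase."""
    if a == "I":
        return b
    if b == "I":
        return a
    if a == b:
        return "I"
    return ({"X", "Y", "Z"} - {a, b}).pop()

# Heisenberg propagation of single-qubit Paulis through CNOT (phases dropped):
#   X_c -> X_c X_t,  Y_c -> Y_c X_t,  Z_c -> Z_c,
#   X_t -> X_t,      Y_t -> Z_c Y_t,  Z_t -> Z_c Z_t.
CTRL_IMAGE = {"I": ("I", "I"), "X": ("X", "X"), "Y": ("Y", "X"), "Z": ("Z", "I")}
TARG_IMAGE = {"I": ("I", "I"), "X": ("I", "X"), "Y": ("Z", "Y"), "Z": ("Z", "Z")}

def conjugate_through_cnot(p_ctrl, p_targ):
    """Image of a two-qubit Pauli under conjugation by CNOT (control 0, target 1)."""
    c1, t1 = CTRL_IMAGE[p_ctrl]
    c2, t2 = TARG_IMAGE[p_targ]
    return pauli_mult(c1, c2), pauli_mult(t1, t2)

def randomized_cnot_instance():
    """One randomly compiled instance of the CNOT cycle.

    Correction = CNOT * twirl * CNOT†, so correction . CNOT . twirl == CNOT
    for every random choice, while errors on the hard cycle get twirled.
    """
    twirl = (random.choice("IXYZ"), random.choice("IXYZ"))
    correction = conjugate_through_cnot(*twirl)
    return [("pauli", twirl), ("cnot", (0, 1)), ("pauli", correction)]

for _ in range(3):
    print(randomized_cnot_instance())
```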
•Medication errors from community pharmacies are a serious patient safety hazard.
•Complexity limits the use of human reliability analysis (HRA) to predict error rates.
•We create SAFPHR: Systems Analysis for Formal Pharmaceutical Human Reliability.
•SAFPHR fixes the limitations of previous HRAs by using probabilistic model checking.
•We apply SAFPHR to pharmacy dispensing and validate predictions with literature.
Medication errors originating in community pharmacies are a serious patient safety hazard. However, due to the complexity of the community pharmacy environment, current experimental and observational studies are insufficient to address these problems. Furthermore, the static nature of traditional, model-based human reliability analyses (HRAs) makes them unable to handle the dynamic environmental elements that can impact human performance. To address this issue and allow analysts to accurately predict medication error rates, we develop a new HRA called Systems Analysis for Formal Pharmaceutical Human Reliability (SAFPHR). This method addresses the limits of previous HRAs by combining concepts from the Cognitive Reliability and Error Analysis Method (CREAM) with probabilistic model checking, a computational tool for automatically proving properties about complex, stochastic systems. In this paper, we use SAFPHR to analyze a common community pharmacy dispensing procedure, compare our results to published error rates, and use our results to explore interventions that could reduce error rates. We conclude by discussing our results and exploring how the method could be developed in future research.
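For intuition only, the sketch below builds a much simpler CREAM-flavored screening model of a dispensing procedure: each step has a nominal human error probability scaled by a common-performance-condition multiplier, and a final verification catches a fraction of the slips. This is not SAFPHR and does not use probabilistic model checking; every step name and number is an assumption for illustration.

```python
# Minimal, illustrative probability model of a dispensing procedure in the
# spirit of CREAM-style HRA (not the SAFPHR model itself).

steps = {                      # step: nominal human error probability (assumed)
    "intake_prescription": 1e-3,
    "select_drug":         3e-3,
    "count_fill":          2e-3,
    "label":               1e-3,
}
cpc_multiplier = 1.5           # e.g. high workload / interruptions (assumed)
p_catch_at_verification = 0.9  # fraction of slips caught by the pharmacist check

# Probability that at least one step fails, before verification.
p_no_error = 1.0
for hep in steps.values():
    p_no_error *= 1.0 - min(1.0, hep * cpc_multiplier)
p_any_error = 1.0 - p_no_error

# Errors that slip past verification reach the patient.
p_dispensed_error = p_any_error * (1.0 - p_catch_at_verification)
print(f"P(error reaches patient) = {p_dispensed_error:.2e} per prescription")
```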
To solve the problem of unknown noise covariance matrices inherent in the cooperative localization of autonomous underwater vehicles, a new adaptive extended Kalman filter is proposed. The predicted error covariance matrix and measurement noise covariance matrix are adaptively estimated based on an online expectation-maximization approach. Experimental results illustrate that, under the circumstances that are detailed in the paper, the proposed algorithm has better localization accuracy than existing state-of-the-art algorithms.
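A simplified stand-in for the adaptive idea is sketched below: a one-dimensional Kalman filter whose measurement-noise variance R is re-estimated online with an EM-style update from filtered residuals. The paper's algorithm adapts both the predicted error covariance and the measurement noise covariance within an extended Kalman filter; here the dynamics are a linear random walk and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

T = 400
true_R = 4.0                      # measurement-noise variance, unknown to the filter
Q = 0.01                          # process-noise variance (assumed known here)

# Simulate a random-walk state and noisy measurements.
x_true = np.cumsum(rng.normal(0, np.sqrt(Q), T))
z = x_true + rng.normal(0, np.sqrt(true_R), T)

x_hat, P, R_hat = 0.0, 1.0, 1.0   # initial state, covariance, and R guess
R_window = []

for k in range(T):
    # Predict (identity dynamics for a random walk).
    x_pred, P_pred = x_hat, P + Q

    # Update with the current R estimate.
    K = P_pred / (P_pred + R_hat)
    x_hat = x_pred + K * (z[k] - x_pred)
    P = (1 - K) * P_pred

    # EM-style M-step for R over a sliding window: E[(z - H x)^2] = residual^2 + H P H^T.
    residual = z[k] - x_hat
    R_window.append(residual ** 2 + P)
    if len(R_window) > 50:
        R_window.pop(0)
    R_hat = float(np.mean(R_window))

print(f"true R = {true_R}, estimated R = {R_hat:.2f}")
```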
Deep learning (DL) aims at learning meaningful representations. A meaningful representation gives rise to significant performance improvements on associated machine learning (ML) tasks by replacing the raw data as the input. However, optimal architecture design and model parameter estimation in DL algorithms are widely considered to be intractable. Evolutionary algorithms are well suited to complex and nonconvex problems because they are gradient-free and insensitive to local optima. In this paper, we propose a computationally economical algorithm for evolving unsupervised deep neural networks to efficiently learn meaningful representations, which is well suited to the current big data era, where sufficient labeled data for training is often expensive to acquire. In the proposed algorithm, finding an appropriate architecture and the initial parameter values for the ML task at hand is modeled by a computationally efficient gene encoding approach, which is employed to effectively model tasks with a large number of parameters. In addition, a local search strategy is incorporated to facilitate exploitation and further improve performance. Furthermore, a small proportion of labeled data is utilized during the evolutionary search to guarantee that the learned representations are meaningful. The performance of the proposed algorithm has been thoroughly investigated on classification tasks. Specifically, the proposed algorithm consistently reaches a classification error rate of 1.15% on MNIST, a very promising result compared with state-of-the-art unsupervised DL algorithms.
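The toy sketch below shows the overall loop the abstract describes: a gene encodes depth and layer widths, a small population evolves by selection and mutation, and a hill-climbing local search refines the best gene. The fitness function is a placeholder standing in for representation quality measured on a small labeled subset, and every constant is an illustrative assumption rather than the paper's configuration.

```python
import random

random.seed(0)

def random_gene():
    """Gene = list of hidden-layer widths (depth and widths are the search space)."""
    depth = random.randint(1, 4)
    return [random.choice([32, 64, 128, 256]) for _ in range(depth)]

def fitness(gene):
    # Placeholder for "quality of the learned representation": prefer moderate
    # capacity, penalize depth. Replace with validation error on labeled data.
    capacity = sum(gene)
    return -abs(capacity - 300) - 10 * len(gene)

def mutate(gene):
    child = gene[:]
    if random.random() < 0.3 and len(child) < 5:   # occasionally add a layer
        child.append(random.choice([32, 64, 128, 256]))
    i = random.randrange(len(child))                # resize one layer
    child[i] = random.choice([32, 64, 128, 256])
    return child

def local_search(gene, steps=20):
    """Simple hill climbing around a gene to exploit the local neighborhood."""
    best, best_f = gene, fitness(gene)
    for _ in range(steps):
        cand = mutate(best)
        if fitness(cand) > best_f:
            best, best_f = cand, fitness(cand)
    return best

population = [random_gene() for _ in range(20)]
for _ in range(30):                                 # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:5]
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = local_search(max(population, key=fitness))
print("best architecture (hidden layer widths):", best, "fitness:", fitness(best))
```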
This research aimed to describe the errors made by students in solving higher-order thinking skills (HOTS) problems on linear equations in two variables, viewed from their cognitive style, during the Covid-19 pandemic. This research uses a qualitative approach. The research subjects consisted of 4 students selected from 31 students in class IX of Junior High School 1 Petarukan, grouped into two students with a field-dependent (FD) cognitive style and two with a field-independent (FI) cognitive style. The subjects' cognitive styles were determined by their Group Embedded Figures Test (GEFT) scores, while errors in problem solving were identified through TKPS and interviews. Field-dependent subjects tend to make errors in transformation and encoding, with a low or fair error rate. On the other hand, field-independent subjects tend to make errors in understanding the problem, with a low error rate. The causes of the errors made by field-dependent subjects were an inability to construct complete mathematical models and a failure to write down what was being sought, while field-independent subjects were unable to draw sketches correctly. As a recommendation for other researchers: this study only explains errors in reading, transformation, processing capability, and encoding for students with FD and FI cognitive styles; students still make many other kinds of errors, so further research with better execution is needed.