As the modern world becomes increasingly digitized and interconnected, distributed signal processing has proven effective at processing its large volumes of data. A main challenge limiting the broad use of distributed signal processing techniques, however, is the issue of privacy when handling sensitive data. To address this issue, we propose a novel yet general subspace perturbation method for privacy-preserving distributed optimization, which allows each node to obtain the desired solution while protecting its private data. In particular, we show that the dual variable introduced by each distributed optimizer does not converge in a certain subspace determined by the graph topology. The optimization variable, being orthogonal to this non-convergent subspace, is nevertheless guaranteed to converge to the desired solution. We therefore propose to insert noise into the non-convergent subspace through the dual variable, so that the private data are protected while the accuracy of the desired solution is completely unaffected. Moreover, the proposed method is shown to be secure under two widely used adversary models: passive and eavesdropping. Furthermore, we consider several distributed optimizers, such as ADMM and PDMM, to demonstrate its general applicability. Finally, we test the performance on a set of applications. Numerical results indicate that the proposed method outperforms existing methods in terms of estimation accuracy, privacy level, communication cost, and convergence rate.
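The subspace-perturbation idea above can be illustrated with a small linear-algebra sketch. This is not the paper's construction: the matrix A below is a hypothetical stand-in for the graph-dependent matrix whose column space carries the convergent part of the dual variable; noise projected onto the orthogonal complement (the "non-convergent" subspace) is invisible to anything that only observes A through its transpose.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical graph-dependent matrix; its column space stands in for the
# subspace in which the dual variable converges.
A = rng.standard_normal((6, 3))

n = rng.standard_normal(6)          # raw noise
P_range = A @ np.linalg.pinv(A)     # orthogonal projector onto range(A)
n_nc = n - P_range @ n              # component in the non-convergent subspace

# The perturbation lies in the null space of A^T, so it does not leak into
# quantities computed through A^T (and hence cannot bias the solution).
print(np.abs(A.T @ n_nc).max())
```

The same projection trick works for any full-column-rank A, which is what makes the construction independent of the particular optimizer.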
Sparse Linear Prediction and Its Applications to Speech Processing
Giacobello, D.; Christensen, M. G.; Murthi, M. N.; et al.
IEEE Transactions on Audio, Speech, and Language Processing, 07/2012, Volume 20, Issue 5
Journal Article
Peer reviewed
Open access
The aim of this paper is to provide an overview of sparse linear prediction, a set of speech processing tools created by introducing sparsity constraints into the linear prediction framework. These tools have been shown to be effective for several problems in the modeling and coding of speech signals. For speech analysis, we provide predictors that accurately model the speech production process and overcome problems associated with traditional linear prediction. In particular, the obtained predictors offer a more effective decoupling of the vocal tract transfer function and its underlying excitation, making them very efficient for the analysis of voiced speech. For speech coding, we provide predictors that shape the residual according to the characteristics of sparse encoding techniques, resulting in more straightforward coding strategies. Furthermore, encouraged by the promising application of compressed sensing to signal compression, we investigate its formulation and application to sparse linear predictive coding. The proposed estimators are all solutions to convex optimization problems, which can be solved efficiently and reliably using, e.g., interior-point methods. Extensive experimental results support the effectiveness of the proposed methods, showing improvements over traditional linear prediction in both speech analysis and coding.
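One ingredient of sparse linear prediction, the 1-norm fit of the prediction residual, can be sketched compactly. The paper uses interior-point solvers; the sketch below instead uses iteratively reweighted least squares, which is simpler but solves the same l1 problem min_a ||x - Xa||_1. The signal model (an AR(2) process driven by sparse impulses) is an illustrative stand-in for voiced speech.

```python
import numpy as np

def l1_lp(X, x, iters=50, eps=1e-6):
    """1-norm linear prediction via iteratively reweighted least squares."""
    a = np.linalg.lstsq(X, x, rcond=None)[0]     # 2-norm warm start
    for _ in range(iters):
        w = 1.0 / (np.abs(x - X @ a) + eps)      # reweight by residual size
        W = np.sqrt(w)[:, None]
        a = np.linalg.lstsq(W * X, np.sqrt(w) * x, rcond=None)[0]
    return a

# Synthetic "voiced" signal: stable AR(2) filter excited by sparse impulses.
a_true = np.array([1.2, -0.6])
x = np.zeros(200)
for t in range(2, 200):
    u = 2.0 if t % 40 == 0 else 0.0              # sparse excitation
    x[t] = a_true @ x[t-2:t][::-1] + u

X = np.column_stack([x[1:-1], x[:-2]])           # lags 1 and 2
a_hat = l1_lp(X, x[2:])
```

Because the residual under the true predictor is sparse, the l1 criterion recovers the coefficients far more accurately than the 2-norm fit would, which is exactly the decoupling effect described in the abstract.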
In this paper, we consider the problem of separating and enhancing periodic signals from single-channel noisy mixtures; more specifically, the problem of designing filters for such tasks. We propose a number of novel filter designs that 1) are specifically aimed at periodic signals, 2) are optimal given the observed signal and thus signal-adaptive, 3) offer full parametrizations of periodic signals, and 4) reduce to well-known designs in special cases. The resulting filters can be used for a multitude of applications, including the processing of speech and audio signals. Illustrative signal examples demonstrate their superior properties compared to other related filters, and the properties of the various designs are analyzed on synthetic signals in Monte Carlo simulations.
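A sketch of one such signal-adaptive design: a distortionless LCMV-type filter constrained to pass the harmonics of a known fundamental frequency w0 while minimizing output power under the observed covariance R. This is only one representative of the family of designs the abstract describes, and the complex-exponential formulation is an assumption made for brevity.

```python
import numpy as np

def harmonic_lcmv(R, w0, L):
    """Minimum-output-power filter with unit gain at harmonics 1..L of w0."""
    M = R.shape[0]
    m = np.arange(M)
    Z = np.exp(1j * w0 * np.outer(m, np.arange(1, L + 1)))   # M x L harmonics
    Ri_Z = np.linalg.solve(R, Z)
    # h = R^{-1} Z (Z^H R^{-1} Z)^{-1} 1 : distortionless at the harmonics
    return Ri_Z @ np.linalg.solve(Z.conj().T @ Ri_Z, np.ones(L))

# Usage on a synthetic mixture: periodic signal in white noise.
rng = np.random.default_rng(2)
M, w0, L = 40, 0.3, 3
n = np.arange(2000)
s = sum(np.cos(l * w0 * n + rng.uniform(0, 2 * np.pi)) for l in range(1, L + 1))
x = s + 0.5 * rng.standard_normal(n.size)

Xm = np.lib.stride_tricks.sliding_window_view(x, M)          # length-M frames
R = Xm.T @ Xm / Xm.shape[0]                                  # sample covariance
h = harmonic_lcmv(R + 1e-6 * np.eye(M), w0, L)               # adaptive filter
```

The filter is signal-adaptive because R is estimated from the observation itself; by construction Z^H h = 1, so the periodic component passes undistorted while broadband noise is suppressed.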
Kernel adaptive filters (KAFs) have emerged as a prominent method for nonlinear system identification (NSI). However, a KAF becomes computationally intensive as the input signal grows. In complex systems, a traditional KAF with a unit time-delay structure may struggle with insufficient control capability. Moreover, KAFs adapted with second-order statistics can be susceptible to non-Gaussian noise. In this letter, we introduce the Laguerre kernel adaptive filter (LKAF) for NSI, using a block-oriented nonlinear model. The LKAF leverages the Laguerre series to approximate the linear block, benefiting from infinite impulse response (IIR) characteristics and a simple feedforward structure. To address non-Gaussian noise, the LKAF employs an arctangent (AT) criterion. This integration leads to the Laguerre kernel arctangent least mean square (L-KATLMS) algorithm and its variants, which use random Fourier approximation. Simulation results demonstrate the superiority of the proposed algorithms for NSI.
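The combination of a kernel LMS loop with an arctangent-type cost can be sketched as follows. This is illustrative only: the letter's exact AT criterion, Laguerre input structure, and random-Fourier variant are not reproduced, and the cost used here, (1/2)·arctan(e²), is one plausible arctangent form whose gradient e/(1+e⁴) is bounded and suppresses large (non-Gaussian) errors.

```python
import numpy as np

def at_klms(U, d, mu=0.2, sigma=1.0):
    """Kernel LMS with a bounded arctangent-style error score."""
    centers, coeffs = [], []
    err = np.zeros(len(d))
    for t, u in enumerate(U):
        if centers:
            # Gaussian-kernel expansion over all past inputs (growing dictionary)
            k = np.exp(-np.sum((np.array(centers) - u) ** 2, axis=1)
                       / (2 * sigma ** 2))
            y = np.dot(coeffs, k)
        else:
            y = 0.0
        e = d[t] - y
        err[t] = e
        centers.append(u)
        coeffs.append(mu * e / (1.0 + e ** 4))   # gradient of (1/2)arctan(e^2)
    return err

# Toy nonlinear system identification task.
rng = np.random.default_rng(5)
U = rng.uniform(-1, 1, size=(500, 2))
d = np.sin(np.pi * U[:, 0]) * U[:, 1] + 0.01 * rng.standard_normal(500)
err = at_klms(U, d)
```

The growing dictionary in the loop is precisely the computational burden the letter mentions; random Fourier approximation replaces it with a fixed-dimensional feature map.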
•Privacy, accuracy and communication efficiency are major concerns in distributed computing systems; the proposed approach achieves all three simultaneously.
•Information-theoretic privacy-preserving methods often incur a trade-off between privacy and communication bandwidth.
•A connection between quantization and privacy preservation is established via adaptive differential quantization.
•Accuracy is not compromised by accounting for communication efficiency and privacy.
•The result is of high practical value.
Privacy and communication cost are both major concerns in distributed optimization over networks, and there is often a trade-off between them because the encryption methods used for privacy preservation typically require expensive communication overhead. To address these issues, in this paper we propose a quantization-based approach that achieves both communication-efficient and privacy-preserving solutions in the context of distributed optimization. By deploying an adaptive differential quantization scheme, we allow each node in the network to reach its optimum solution at a low communication cost while keeping its private data unrevealed. The proposed approach is general and can be applied to various distributed optimization methods, such as the primal-dual method of multipliers (PDMM) and the alternating direction method of multipliers (ADMM). We consider two widely used adversary models, passive and eavesdropping, investigate the properties of the proposed approach in different applications, and demonstrate its superior performance compared to existing privacy-preserving approaches in terms of both accuracy and communication cost.
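The differential-quantization idea can be shown with a toy transmitter: each iteration sends only the quantized difference from the previous reconstruction, with a step size that shrinks geometrically as the optimizer's iterates converge. This is a minimal sketch of the mechanism, not the paper's exact scheme (the shrink factor and step schedule here are illustrative choices).

```python
import numpy as np

def transmit(values, delta0=1.0, gamma=0.8):
    """Quantize successive differences with a geometrically shrinking step."""
    recon, out = 0.0, []
    for t, v in enumerate(values):
        delta = delta0 * gamma ** t                 # step shrinks over time
        q = delta * np.round((v - recon) / delta)   # quantized difference
        recon += q                                  # receiver tracks the same state
        out.append(recon)
    return np.array(out)

# Converging optimizer iterates (toy example).
iterates = 1.0 - 0.8 ** np.arange(30)
recon = transmit(iterates)
```

Because the reconstruction error at each step is at most half the current step size, shrinking the step in lockstep with convergence keeps the communicated differences coarse (cheap) early on while still reaching the exact solution, which is how communication efficiency and accuracy coexist.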
With the emergence of more advanced separation networks, significant progress has been made in time-domain speech separation. These methods typically use a temporal encoder–decoder structure to encode speech feature sequences and thereby accomplish the separation task. However, due to the limitations of the traditional encoder–decoder structure, separation performance drops sharply when the encoded sequence is short; with a sufficiently long encoded sequence the performance improves, but at the cost of increased computational complexity and training cost. This paper therefore compresses and reconstructs the speech feature sequence through a multi-layer convolution structure and proposes a multi-layer encoder–decoder time-domain speech separation model (MLED). In this model, the encoder–decoder structure can compress the speech sequence to a short length while ensuring that separation performance does not degrade. Combined with our multi-scale temporal attention (MSTA) separation network, MLED achieves efficient and precise separation of short encoded sequences. Compared to previous advanced time-domain separation methods, our experiments show that MLED achieves competitive separation performance with smaller model size, lower computational complexity, and lower training cost.
•Our designed encoder–decoder network is more effective on shorter encoded sequences.
•Because the encoded sequence is shorter, MLED can perform the separation task efficiently.
•MLED better balances separation performance, model size, computational cost, and training cost.
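The sequence-compression half of the idea can be sketched shape-only in numpy: stacked stride-2 convolutions halve the sequence length per layer, so a deeper encoder yields the short encoded sequence the separator operates on. Weights here are random and untrained; MLED's actual layers, channel counts, decoder, and attention separator are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)

def conv1d_stride2(x, w):
    """Stride-2 1-D convolution. x: (T, Cin), w: (K, Cin, Cout)."""
    K = w.shape[0]
    T_out = (x.shape[0] - K) // 2 + 1
    return np.stack([np.tensordot(x[2 * t:2 * t + K], w, axes=([0, 1], [0, 1]))
                     for t in range(T_out)])

x = rng.standard_normal((256, 1))               # mock single-channel frame
w1 = rng.standard_normal((2, 1, 8)) * 0.1       # layer 1: 1 -> 8 channels
w2 = rng.standard_normal((2, 8, 16)) * 0.1      # layer 2: 8 -> 16 channels
h = conv1d_stride2(conv1d_stride2(x, w1), w2)   # length 256 -> 128 -> 64
```

Each added layer trades sequence length for channel depth, which is why the separator downstream can run on a much shorter, cheaper sequence.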
Traditional functional link neural networks (FLNNs) impose a significant computational burden due to their input expansion, primarily stemming from the utilization of digital filters. This paper presents a Laguerre FLNN filter for nonlinear active noise control (NANC) systems. By employing a truncated Laguerre series, the presented filter achieves effective approximation of long primary paths with a reduced filter length. Moreover, we develop adaptive algorithms rooted in information-theoretic learning (ITL) within the framework of the Laguerre-FLNN NANC model. Using ITL criteria, a Laguerre filtered-s maximum correntropy criterion (LFsMCC) algorithm is derived, and a Laguerre filtered-s quantized minimum error entropy (LFsQMEE) algorithm is proposed by minimizing Renyi's quadratic entropy. To reduce the computational cost, an online vector quantization method is used to improve the LFsQMEE; this technique selectively quantizes the error vectors, reducing them to a smaller subset of samples within the codebook. Moreover, an enhanced LFsQMEE with a fiducial point is introduced. The steady-state performance and computational complexity are analyzed. The theoretical analysis is validated through simulations, and the control performance of the proposed model and algorithms is tested in experiments with both simulated and real paths.
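The maximum correntropy criterion that underlies LFsMCC can be shown in its simplest setting: a plain LMS-style loop where the Gaussian kernel of the error weights each update, so impulsive errors barely move the weights. This sketch identifies a linear FIR system rather than the paper's Laguerre-FLNN ANC structure, and the kernel width and step size are illustrative choices.

```python
import numpy as np

def mcc_lms(x, d, M=4, mu=0.05, sigma=2.0):
    """LMS with maximum-correntropy weighting: exp(-e^2/2s^2) scales each step."""
    w = np.zeros(M)
    for t in range(M, x.size):
        u = x[t - M + 1:t + 1][::-1]                       # regressor
        e = d[t] - w @ u
        w += mu * np.exp(-e ** 2 / (2 * sigma ** 2)) * e * u
    return w

# System identification under impulsive (non-Gaussian) noise.
rng = np.random.default_rng(3)
w_true = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, w_true)[:x.size]
d += np.where(rng.random(x.size) < 0.02, 20.0, 0.0) * rng.standard_normal(x.size)
w_hat = mcc_lms(x, d)
```

When an impulse makes e large, exp(-e²/2σ²) collapses toward zero and the update is effectively skipped; for small errors the factor is near one and the algorithm behaves like ordinary LMS.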
Robust adaptive filters based on hyperbolic cosine and correntropy functions have been successfully employed in non-Gaussian noise environments. However, these filters suffer from high steady-state misalignment due to significant weight updates in the presence of outliers. In addition, many practical systems exhibit sparse characteristics, which these filters do not take into account. In this paper, a generalized soft-root-sign (GSRS) function is proposed and the corresponding GSRS adaptive filter is designed. The proposed GSRS yields a negligible weight update in the presence of large outliers and thereby achieves lower steady-state misalignment. To further improve modelling performance for sparse systems while retaining robustness, sparsity-aware GSRS algorithms are also developed. The bound on the learning rate and the computational complexity of the proposed algorithms are also investigated. Simulation studies confirm the improved convergence characteristics of the proposed algorithms over existing algorithms.
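The two mechanisms described above, a redescending error score and a sparsity-aware penalty, can be combined in a short LMS loop. The score used here, e/(1+(e/δ)²), is a generic redescending function standing in for the paper's GSRS (whose exact form is not reproduced), and the zero-attracting l1 term is one standard way to exploit system sparsity.

```python
import numpy as np

def robust_sparse_lms(x, d, M=16, mu=0.02, delta=1.0, rho=1e-4):
    """Robust LMS: redescending score + zero-attracting term for sparse systems."""
    w = np.zeros(M)
    for t in range(M - 1, x.size):
        u = x[t - M + 1:t + 1][::-1]
        e = d[t] - w @ u
        score = e / (1.0 + (e / delta) ** 2)    # vanishes for large outliers
        w += mu * score * u - rho * np.sign(w)  # l1 zero-attractor
    return w

# Sparse system identification with impulsive noise.
rng = np.random.default_rng(4)
w_true = np.zeros(16)
w_true[[0, 5]] = [1.0, -0.5]                    # only two active taps
x = rng.standard_normal(8000)
d = np.convolve(x, w_true)[:x.size]
d += np.where(rng.random(x.size) < 0.05, 10.0, 0.01) * rng.standard_normal(x.size)
w_hat = robust_sparse_lms(x, d)
```

Because the score redescends to zero for large errors, an outlier produces almost no weight update, while the zero-attractor continuously shrinks the inactive taps toward zero, which is the behavior the abstract attributes to the sparsity-aware GSRS algorithms.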