The performance of reproducing kernel Hilbert space-based methods is known to be sensitive to the choice of the reproducing kernel. Choosing an adequate reproducing kernel can be challenging and computationally demanding, especially in data-rich tasks without prior information about the solution domain. In this paper, we propose a learning scheme that scalably combines several single-kernel online methods to reduce the kernel-selection bias. The proposed learning scheme applies to any task formulated as a regularized empirical risk minimization convex problem. More specifically, our learning scheme builds on a multi-kernel learning formulation that can widen any single-kernel solution space, thus increasing the possibility of finding higher-performance solutions. In addition, it is parallelizable, allowing the computational load to be distributed across different computing units. We show experimentally that the proposed learning scheme outperforms each of the combined single-kernel online methods in terms of the cumulative regularized least-squares cost metric.
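As a rough illustration of the kind of objective such a scheme operates on (a minimal sketch; the kernel dictionary, the least-squares loss, and the regularizer below are assumptions, not the paper's exact formulation), a multi-kernel regularized empirical risk minimization problem can be written as:

```latex
% Minimal sketch of a multi-kernel regularized least-squares objective.
% The dictionary of kernels {kappa_m}, the loss, and the regularizer are
% illustrative assumptions, not necessarily the paper's exact formulation.
\min_{\{f_m \in \mathcal{H}_{\kappa_m}\}_{m=1}^{M}} \;
  \sum_{t=1}^{T} \Big( y_t - \sum_{m=1}^{M} f_m(x_t) \Big)^{2}
  \; + \; \lambda \sum_{m=1}^{M} \lVert f_m \rVert_{\mathcal{H}_{\kappa_m}}^{2}
```

Searching over the sum of spaces rather than a single reproducing kernel Hilbert space is what widens the solution space and reduces the dependence on any one kernel choice; each component function can also be updated by its own computing unit, which is where the parallelism comes from.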
The task of reconstructing smooth signals from streamed data in the form of signal samples arises in various applications. This work addresses such a task subject to a zero-delay response; that is, the smooth signal must be reconstructed sequentially as soon as a data sample is available and without access to subsequent data. State-of-the-art approaches solve this problem by interpolating consecutive data samples with splines. Here, each interpolation step yields a piece that ensures a smooth signal reconstruction while minimizing a cost metric, typically a weighted sum of the squared residual and a derivative-based measure of smoothness. As a result, zero-delay interpolation is achieved in exchange for an almost certainly higher cumulative cost compared to interpolating all data samples together. This paper presents a novel approach to further reduce this cumulative cost on average. First, we formulate the zero-delay smoothing spline interpolation problem from a sequential decision-making perspective, allowing us to model the future impact of each interpolated piece on the average cumulative cost. Then, we propose an interpolation method that exploits the temporal dependencies between the streamed data samples. Our method is assisted by a recurrent neural network and trained accordingly to reduce the accumulated cost on average over a set of example data samples collected from the same signal source as the signal to be reconstructed. Finally, we present extensive experimental results on synthetic and real data showing how our approach outperforms the aforementioned state of the art.
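For concreteness, the per-piece cost metric mentioned above typically takes the classical smoothing-spline form (a sketch using the textbook second-derivative penalty; the exact derivative order and weighting used here may differ):

```latex
% Classical smoothing-spline cost: data fidelity plus a derivative-based
% roughness penalty, balanced by a weight lambda >= 0.
\sum_{n} \big( y_n - f(t_n) \big)^{2}
  \; + \; \lambda \int \big( f''(t) \big)^{2} \, dt
```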
A large number of applications in wireless sensor networks involve projecting a vector of noisy observations onto a subspace dictated by prior information about the field being monitored. In general, accomplishing such a task in a centralized fashion entails large power consumption and congestion at certain nodes, and suffers from robustness issues against possible node failures. Computing such projections in a decentralized fashion is an alternative that overcomes these issues. Recent works have shown that this task can be carried out via so-called graph filters, where only local inter-node communication is performed in a distributed manner using a graph shift operator. Most existing methods have focused on designing graph filters for symmetric topologies to compute an exact subspace projection. In this paper, motivated by the asymmetric communications in wireless sensor networks, we instead analyze the design of graph shift operators to perform decentralized subspace projection over asymmetric topologies. We first characterize the existence of solutions and then present a convex optimization problem that also accounts for the efficiency of the graph filtering, together with an ADMM-based solver.
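As a sketch of the mechanism involved (notation assumed for illustration, not taken from the paper): an order-L graph filter is a polynomial in the graph shift operator S, and the design goal is for this polynomial to implement the projector onto the prior subspace, here spanned by the columns of a matrix U:

```latex
% A graph filter as a polynomial of the shift operator S. The design
% seeks coefficients c_l and a (possibly asymmetric) S such that the
% filter equals the projector U U^T onto the prior subspace.
H(S) \;=\; \sum_{l=0}^{L} c_l\, S^{l}, \qquad H(S) \;=\; U U^{\top}
```

Each multiplication by S requires only single-hop exchanges between neighboring nodes, which is what makes the computation decentralized; needing fewer powers of S means fewer communication rounds, the efficiency criterion the optimization problem accounts for.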
Kernel-based approaches have achieved noticeable success as non-parametric regression methods under the framework of stochastic optimization. However, most kernel-based methods in the literature are not suited to tracking sequentially streamed quantized data samples from dynamic environments. This shortcoming has two main causes: first, poor versatility in tracking variables that may change unpredictably over time, primarily due to a lack of flexibility when choosing a functional cost that best suits the associated regression problem; second, indifference to the smoothness of the underlying physical signal generating those samples. This work introduces a novel algorithm consisting of an online regression problem that addresses these two drawbacks and a stochastic proximal method that exploits its structure. In addition, we provide tracking guarantees by analyzing the dynamic regret of our algorithm. Finally, we present experimental results that support our theoretical analysis and show that our algorithm performs favorably compared to the state of the art.
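The tracking guarantee referred to above is expressed through the dynamic regret, which in its standard form (the comparator sequence used in the paper may differ) compares the cumulative loss of the online iterates against that of per-round minimizers:

```latex
% Standard dynamic regret: cumulative loss of the online iterates f_t
% versus a time-varying comparator sequence f_t^*, e.g., the per-round
% minimizers of the instantaneous costs ell_t.
R_T^{\mathrm{dyn}} \;=\; \sum_{t=1}^{T} \ell_t(f_t)
  \;-\; \sum_{t=1}^{T} \ell_t(f_t^{*}),
\qquad f_t^{*} \in \arg\min_{f} \ell_t(f)
```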
Digitalizing real-world analog signals typically involves sampling in time and discretizing in amplitude. Subsequent signal reconstructions inevitably incur an error that depends on the amplitude resolution and the temporal density of the acquired samples. From an implementation viewpoint, consistent signal reconstruction methods have been shown to achieve a favorable error-rate decay as the sampling rate increases. However, these results were obtained in offline settings, so a research gap remains regarding consistent signal reconstruction from data streams. Solving this problem is of great importance because such methods could run at a lower computational cost than existing offline ones, or be used under real-time requirements, without losing the benefits of ensuring consistency. In this paper, we formalize for the first time the concept of consistent signal reconstruction from streaming time-series data. We then present a signal reconstruction method able to enforce consistency while also exploiting the spatiotemporal dependencies of streaming multivariate time-series data to further reduce the reconstruction error. Our experiments show that the proposed method achieves a favorable error-rate decay with the sampling rate compared to a similar but non-consistent reconstruction.
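To fix ideas, consistency can be stated as follows (the notation is an assumption for illustration): if the stream delivers quantized samples q_n = Q(f(t_n)) of a signal f through a quantizer Q, a reconstruction is consistent when re-sampling and re-quantizing it reproduces exactly the observed data:

```latex
% Consistency: the reconstruction, passed through the same
% sample-then-quantize pipeline, yields the observed data again.
Q\big( \hat{f}(t_n) \big) \;=\; Q\big( f(t_n) \big) \;=\; q_n
\qquad \text{for all } n
```

Equivalently, the reconstruction must lie inside every quantization cell defined by the observations, and the streaming setting requires enforcing this membership online as each sample arrives.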
In different controller design frameworks, the design process is divided into two steps. First, a controller is designed to render invariant a given manifold, which may be specified implicitly by some target dynamics. Second, another control layer is designed to make that manifold attractive. We concentrate on the first step: taking the dynamics with nominal parameters, we constructively produce a controller such that the closed-loop system trajectories comply with a given restriction. This restriction may take the form of a desired first integral or of a dynamical system which may, for example, represent the error evolution. The controllers may be static or dynamic, and the method should apply to implicit systems. The systems, though, must be defined by differential polynomials, thus enabling the use of computational methods from differential algebra.
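As a simple illustration of the kind of restriction meant here (the system and the function V are illustrative, not taken from the abstract): for a control system with dynamics f, asking for a desired first integral V means the controller must make the time derivative of V vanish along closed-loop trajectories:

```latex
% A desired first integral V(x) as a closed-loop restriction: the
% controller u(x) must keep V constant along trajectories of
% \dot{x} = f(x, u). Illustrative, not the abstract's specific system.
\frac{d}{dt} V\big(x(t)\big)
  \;=\; \nabla V(x)^{\top} f\big(x, u(x)\big) \;=\; 0
```

When f and V are differential polynomials, finding such a controller reduces to symbolic elimination, which is where the computational machinery of differential algebra applies.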
Mechanically interlocked derivatives of carbon nanotubes (MINTs) are appealing nanotube products because they show high stability without altering the carbon nanotube structure. So far, MINTs have been synthesized using ring-closing metathesis, disulfide exchange reactions, H-bonding, or direct threading with macrocycles. Here, we describe the encapsulation of single-walled carbon nanotubes within a palladium-based metallosquare. The formation of MINTs was confirmed by a variety of techniques, including high-resolution transmission electron microscopy. We find that the formation of these MINTs is remarkably sensitive to structural variations of the metallo-assemblies. When a metallosquare with a cavity of appropriate shape and size is used, MINT formation proceeds successfully by both templated clipping and direct threading. Our studies also indicate how supramolecular coordination complexes can help expand the potential applications of MINTs.
Metallacycles embrace carbon nanotubes: The encapsulation of SWNTs within a Pd-based metallosquare featuring N-heterocyclic-carbene ligands is described. The formation of MINTs with this type of metallo-assembly is found to be very sensitive to structural variations, but with an appropriately shaped assembly it proceeds successfully via both "clipping" and "threading" strategies.
Pancreatic ductal adenocarcinoma (PDAC), the fourth leading cause of cancer death, has a 5-year survival rate of approximately 7-9%. The ineffectiveness of anti-PDAC therapies is believed to be due to the existence of a subpopulation of tumor cells known as cancer stem cells (CSCs), which are functionally plastic and have exclusive tumorigenic, chemoresistant, and metastatic capacities. Herein, we describe a 2D in vitro system for the long-term enrichment of pancreatic CSCs that is amenable to biological and CSC-specific studies. By changing the carbon source from glucose to galactose in vitro, we force PDAC cells to rely on oxidative phosphorylation (OXPHOS), resulting in an enrichment of CSCs defined by increased CSC biomarker and pluripotency gene expression, greater tumorigenic potential, induced but reversible quiescence, increased OXPHOS activity, enhanced invasiveness, and upregulated immune evasion properties. This CSC enrichment method can facilitate the discovery of new CSC-specific hallmarks for future development into targets for PDAC-based therapies.