Cognitive radio ad hoc networks (CRAHNs) constitute a viable solution to the current inefficiency of spectrum allocation and a means to deploy highly reconfigurable and self-organizing wireless networks. Cognitive radio (CR) devices are envisaged to utilize the spectrum opportunistically by dynamically accessing different licensed portions of the spectrum. To this end, most recent research has focused on devising spectrum sensing and sharing algorithms at the link layer, so that CR devices can operate without interfering with the transmissions of licensed users, also called primary users (PUs). However, it is also important to consider the impact of such schemes on the higher layers of the protocol stack, in order to provide efficient end-to-end data delivery. At present, routing and transport layer protocols constitute an important yet not deeply investigated area of research for CRAHNs. This paper provides three main contributions to the modeling and performance evaluation of end-to-end protocols (i.e., routing and transport layer protocols) for CRAHNs. First, we describe NS2-CRAHN, an extension of the NS-2 simulator designed to support realistic simulation of CRAHNs. NS2-CRAHN contains an accurate yet flexible model of PU activity and of the cognitive cycle implemented by each CR user. Second, we analyze the impact of CRAHN characteristics on the route formation process, considering different routing metrics and route discovery algorithms. Finally, we study TCP performance over CRAHNs, considering the impact of three factors on different TCP variants: (i) the spectrum sensing cycle, (ii) interference from PUs, and (iii) channel heterogeneity. Simulation results highlight the differences between CRAHNs and traditional ad hoc networks and provide useful directions for the design of novel end-to-end protocols for CRAHNs.
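As a rough illustration of how a periodic sensing cycle and PU activity erode the capacity seen by routing and transport protocols, consider the following toy model; the parameter names and numbers are assumptions for illustration, not the NS2-CRAHN API:

```python
# Toy model of a periodic cognitive cycle: a CR node alternates between a sensing
# phase (no data transmission) and a transmission phase, and must stay silent
# whenever a primary user (PU) is detected on the channel.
import random

def effective_throughput(link_rate_mbps, t_sense, t_transmit, pu_busy_prob, cycles=10_000):
    """Average goodput seen above the link layer, in Mb/s (illustrative only)."""
    random.seed(0)
    useful, total = 0.0, 0.0
    for _ in range(cycles):
        total += t_sense + t_transmit
        # Data flows only during the transmission phase and only if no PU is detected.
        if random.random() > pu_busy_prob:
            useful += t_transmit
    return link_rate_mbps * useful / total

print(effective_throughput(link_rate_mbps=2.0, t_sense=0.1, t_transmit=0.9, pu_busy_prob=0.2))
```

Even this crude model shows that longer sensing phases and higher PU activity shrink the bandwidth that end-to-end protocols such as TCP can actually use.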
•Analysis of the effect of neuronal leak in spiking neural network models.
•Leak enhances model robustness by cutting off some high-frequency components from the input.
•The leaky model also decreases the sparsity of computation.
•Analysis relating leak to input frequency components, showing the low-pass filtering effect of leak.
Spiking Neural Networks (SNNs) are being explored to emulate the astounding capabilities of the human brain, which can learn to perform robust and efficient computations with noisy spikes. A variety of spiking neuron models have been proposed to resemble biological neuronal functionality. The simplest and most commonly used are the leaky-integrate-and-fire (LIF) model, which contains a leak path in its membrane potential, and the integrate-and-fire (IF) model, where the leak path is absent. While the LIF model has been argued to be more bio-plausible, a comparative analysis of models with and without leak from a purely computational point of view demands attention, which we try to address in this paper. Our results reveal that the LIF model provides improved robustness and better generalization compared to IF. Frequency-domain analysis demonstrates that leak aids in eliminating high-frequency components from the input, thus enhancing the noise-robustness of SNNs. Additionally, we compare the sparsity of computation between these models. In general, for the same input, the LIF model would be expected to achieve higher sparsity than IF due to the layer-wise decay of spikes caused by membrane potential leak over time. However, contrary to this expectation, we observe that leak decreases the sparsity of computation. Therefore, there exists a trade-off between robustness and energy-efficiency in SNNs, which can be optimized through a suitable choice of the amount of leak in the models.
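A minimal sketch of the two neuron models contrasted above, assuming a discrete-time membrane update and a simple reset rule; it also illustrates how leak acts as a low-pass filter on a rapidly alternating input:

```python
import numpy as np

def simulate_neuron(inputs, threshold=1.0, leak=0.9):
    """Discrete-time membrane update. leak == 1.0 gives an IF neuron (no decay);
    leak < 1.0 gives an LIF neuron, which low-pass filters the input current."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x            # leaky integration (pure integration when leak == 1)
        if v >= threshold:          # fire and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

# High-frequency (alternating) input: the LIF neuron suppresses it more than IF.
noisy_input = np.tile([0.6, -0.4], 50)
print(sum(simulate_neuron(noisy_input, leak=1.0)), "IF spikes")
print(sum(simulate_neuron(noisy_input, leak=0.8)), "LIF spikes")
```

With the alternating input above, the IF neuron still crosses threshold occasionally, while the leaky neuron's membrane potential saturates below threshold, mirroring the noise-filtering behavior the abstract describes.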
Software-defined radio (SDR) allows unprecedented levels of flexibility by transitioning the radio communication system from a rigid hardware platform to a more user-controlled software paradigm. However, it can still be time-consuming to design and implement such SDRs, as they typically require thorough knowledge of the operating environment and careful tuning of the program. In this paper, our contribution is the design of a bidirectional transceiver that runs on the commonly used USRP platform and is implemented in MATLAB using standard tools such as MATLAB Coder and MEX to speed up the processing steps. We outline strategies for creating a state-action-based design, wherein the same node switches between transmitter and receiver functions. Our design allows optimal selection of parameters to meet the timing requirements set forth by the various processing blocks associated with a differential binary phase shift keying physical layer and a CSMA/CA/ACK MAC layer, so that all operations remain functionally compliant with the IEEE 802.11b standard for the 1 Mb/s specification. The code base of the system is built on the Communications System Toolbox and incorporates channel sensing and exponential random back-off for contention resolution. The current work provides an experimental testbed that enables the creation of new MAC protocols starting from the fundamental IEEE 802.11b standard. Our design approach guarantees consistent performance of the bidirectional link, and three-node experimental results demonstrate the robustness of the system in mitigating packet collisions and enforcing fairness among nodes, making it a feasible framework for higher-layer protocol design.
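A minimal sketch of the contention-resolution logic described above (carrier sensing followed by exponential random back-off), written in Python for illustration; the function names and retry limit are assumptions, not the MATLAB/USRP implementation:

```python
import random

def backoff_slots(attempt, cw_min=31, cw_max=1023):
    """Exponential random back-off: the contention window doubles with each failed
    attempt, capped at cw_max, and a uniform slot count is drawn from it."""
    cw = min((cw_min + 1) * (2 ** attempt) - 1, cw_max)
    return random.randint(0, cw)

def send_frame(channel_idle, max_retries=7):
    """Sense the channel; if busy, back off and retry (ACK handling omitted for brevity)."""
    for attempt in range(max_retries):
        if channel_idle():                   # carrier sensing
            return True                      # frame transmitted
        wait = backoff_slots(attempt)        # slots to wait before re-sensing
        # a real transceiver loop would sleep for wait * slot_time here
    return False                             # give up after max_retries
```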
Novel machine learning models (MLMs) using the seasonal indexing approach, which captures the variation in air quality caused by meteorological changes, have been used to provide short-term, real-time forecasts of PM2.5 concentration for one of the most polluted air quality control regions (AQCR) in the capital city of Delhi. Two MLMs, multi-linear regression and random forest, have been developed using time series data for 1-h and 24-h average PM2.5 concentration. Short-term, real-time forecasts have been made using the developed models. Various model performance evaluation indices indicate satisfactory model performance. R² values for the hourly and daily models varied between 0.95 and 0.72 and between 0.76 and 0.68 for the 1st to 5th hour/day, respectively. The lagged values of PM2.5 concentration (persistence) and the hourly and daily indices are the most influential variables for forecasts at immediate time steps. In contrast, seasonal indices become more important as the forecasting time horizon grows. The developed models can be used for making short-term, real-time air quality forecasts and issuing a warning when pollution levels go beyond acceptable limits.
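A minimal sketch of the seasonal-indexing idea paired with a random forest, assuming a synthetic hourly PM2.5 series; the column names, lags, and index definitions are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical hourly PM2.5 series standing in for the Delhi AQCR data.
idx = pd.date_range("2021-01-01", periods=24 * 365, freq="h")
df = pd.DataFrame({"pm25": 80 + 30 * np.sin(2 * np.pi * idx.hour / 24)
                           + np.random.default_rng(0).normal(0, 5, len(idx))}, index=idx)

def add_features(frame, horizon=1):
    out = frame.copy()
    mean = out["pm25"].mean()
    # Seasonal indices: ratio of the hour-of-day / month-of-year mean to the overall mean.
    out["hour_index"] = out.groupby(out.index.hour)["pm25"].transform("mean") / mean
    out["month_index"] = out.groupby(out.index.month)["pm25"].transform("mean") / mean
    for lag in (1, 2, 3, 24):                        # persistence (lagged) features
        out[f"lag_{lag}"] = out["pm25"].shift(lag)
    out["target"] = out["pm25"].shift(-horizon)      # value to forecast
    return out.dropna()

data = add_features(df)
X, y = data.drop(columns=["pm25", "target"]), data["target"]
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.score(X, y))                             # in-sample fit, for illustration only
```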
Syntax is usually studied in the realm of linguistics and refers to the arrangement of words in a sentence. Similarly, an image can be considered a visual 'sentence', with the semantic parts of the image acting as 'words'. While visual syntactic understanding comes naturally to humans, it is interesting to explore whether deep neural networks (DNNs) are equipped with such reasoning. To that end, we alter the syntax of natural images (e.g., swapping the eye and nose of a face), producing what we refer to as 'incorrect' images, to investigate the sensitivity of DNNs to such syntactic anomalies. Through our experiments, we discover an intriguing property of DNNs: state-of-the-art convolutional neural networks, as well as vision transformers, fail to discriminate between syntactically correct and incorrect images when trained only on correct ones. To counter this issue and enable visual syntactic understanding with DNNs, we propose a three-stage framework: (i) the 'words' (or sub-features) in the image are detected, (ii) the detected words are sequentially masked and reconstructed using an autoencoder, and (iii) the original and reconstructed parts are compared at each location to determine syntactic correctness. The reconstruction module is trained with BERT-like masked autoencoding for images, with the motivation of leveraging language-model-inspired training to better capture the syntax. Note that our proposed approach is unsupervised in the sense that the incorrect images are used only during testing and the correct versus incorrect labels are never used for training. We perform experiments on the CelebA and AFHQ datasets and obtain classification accuracies of 92.10% and 90.89%, respectively. Notably, the approach generalizes well to ImageNet samples that share common classes with CelebA and AFHQ, without explicitly training on them.
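A high-level sketch of the three-stage check, with the part detector and masked autoencoder passed in as placeholders; the box format, masking rule, and error threshold are assumptions, not the trained models used in the paper:

```python
import numpy as np

def syntactic_correctness(image, detect_parts, masked_autoencoder, threshold=0.1):
    """Stage (i): detect the 'words' (semantic parts) in the image.
       Stage (ii): mask each part in turn and reconstruct it with the autoencoder.
       Stage (iii): compare original vs reconstructed parts; a large error at any
       location flags the image as syntactically incorrect."""
    errors = []
    for box in detect_parts(image):                 # box = (y0, y1, x0, x1), assumed format
        y0, y1, x0, x1 = box
        masked = image.copy()
        masked[y0:y1, x0:x1] = 0                    # mask one part at a time
        recon = masked_autoencoder(masked)          # BERT-style masked reconstruction
        err = np.mean((image[y0:y1, x0:x1] - recon[y0:y1, x0:x1]) ** 2)
        errors.append(err)
    return max(errors) < threshold                  # True -> syntactically correct
```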
To facilitate the automation process in the Internet of Things, the research issue of distinguishing prospective services out of many "similar" services, and identifying needed services with respect to Quality of Service (QoS) criteria, becomes very important. To address this aim, we adopt heuristic optimization as a robust and efficient approach for solving complex real-world problems. Accordingly, this paper devises a cooperative evolution approach for service composition under QoS restrictions. A series of effective strategies is presented for this problem, including an enhanced local best first strategy and a global best strategy that introduces perturbations. Simulation traces collected from real measurements are used to evaluate the proposed algorithms at different service composition scales; the results indicate that the proposed cooperative evolution approach conducts a highly efficient search with stability and rapid convergence. The proposed algorithm also strikes a well-designed trade-off between population diversity and selection pressure when service compositions occur on a large scale.
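A minimal sketch of the kind of QoS aggregation and "global best with perturbation" move described above; the aggregation weights and data layout are assumptions, not the paper's objective function:

```python
import random

def qos_fitness(composition, services):
    """Aggregate QoS for a sequential composition: latency and cost add up,
    availability multiplies (standard aggregation rules; the weights below are
    illustrative assumptions)."""
    latency = sum(services[s]["latency"] for s in composition)
    cost = sum(services[s]["cost"] for s in composition)
    availability = 1.0
    for s in composition:
        availability *= services[s]["availability"]
    return availability - 0.01 * latency - 0.05 * cost      # higher is better

def perturb_global_best(best, candidates_per_task):
    """One perturbation move: swap the concrete service chosen for a random task
    with another candidate for that task, keeping the rest of the composition."""
    child = list(best)
    i = random.randrange(len(child))
    child[i] = random.choice(candidates_per_task[i])
    return child
```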
•A stroke-based word segmentation approach for online handwritten Bangla words is proposed.
•It easily segments connected strokes that are written without lifting the pen.
•A busy zone is formed over the stroke sample, and a sub-zoning scheme within the busy zone is applied to handle poorly aligned strokes.
•A modified down->up->down strategy is applied after adopting the sub-zoning scheme.
•Recognition of the segmented strokes is performed to validate the segmentation result with the help of a distance-based feature.
In the present work, we propose a novel Bangla word segmentation technique based on a stroke-level busy zone formation procedure. In an unconstrained domain, people often write text in which strokes may be poorly aligned (due to multi-directional skewness), and varied combinations of strokes with various types of joining between them are possible when forming words. Hence, a segmentation approach for stroke extraction is pertinent for any stroke-based word recognition system. The presence of a large symbol set (58 basic symbols and more than 280 compound characters) in Bangla script makes the task more challenging. In the current experiment, our stroke-level segmentation approach effectively handles such Bangla words. A sub-zoning scheme within the busy zone, followed by a modified Down->Up->Down (DUD) concept within these sub-zones, has been used to find valid segmentation points. This scheme avoids, up to a certain extent, over- and under-segmentation issues caused by inherent writing patterns or writing style variations. The proposed segmentation approach has been tested on 6500 online handwritten Bangla word samples with 98.45% correct segmentation accuracy (compared with manually generated ground truth for the same database).
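A toy sketch of the down->up->down idea on a single stroke's y-trajectory; the run-based test and busy-zone check are illustrative assumptions, not the paper's exact rules:

```python
def dud_points(stroke_y, busy_top, busy_bottom):
    """Group the pen trajectory's vertical motion into runs and flag a candidate
    segmentation point wherever a down run, an up run, and another down run occur
    in succession inside the busy zone (y grows downward in screen coordinates)."""
    runs = []                                             # [direction, last_index] per run
    for i in range(1, len(stroke_y)):
        d = 1 if stroke_y[i] > stroke_y[i - 1] else -1    # +1: pen moving down
        if runs and runs[-1][0] == d:
            runs[-1][1] = i
        else:
            runs.append([d, i])
    points = []
    for first, second, third in zip(runs, runs[1:], runs[2:]):
        if (first[0], second[0], third[0]) == (1, -1, 1): # down -> up -> down
            idx = second[1]                               # end of the up run
            if busy_top <= stroke_y[idx] <= busy_bottom:  # restrict to the busy zone
                points.append(idx)
    return points
```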
Over-the-air analog computation allows offloading computation to the wireless environment through carefully constructed transmitted signals. In this paper, we design and implement a first-of-its-kind convolution that uses over-the-air computation and demonstrate it for inference tasks in a convolutional neural network (CNN). We engineer the ambient wireless propagation environment through reconfigurable intelligent surfaces (RIS) to realize such an architecture, which we call 'AirNN'. AirNN leverages the physics of wave reflection to represent a digital convolution, an essential part of a CNN architecture, in the analog domain. In contrast to classical communication, where the receiver must react to the channel-induced transformation, generally represented as a finite impulse response (FIR) filter, AirNN proactively creates signal reflections to emulate specific FIR filters through RIS. AirNN involves two steps: first, the weights of the neurons in the CNN are drawn from a finite set of channel impulse responses (CIR) that correspond to realizable FIR filters; second, each CIR is engineered through RIS, and the reflected signals combine at the receiver to determine the output of the convolution. This paper presents a proof of concept of AirNN by experimentally demonstrating convolutions with over-the-air computation. We then validate the accuracy of the entire resulting CNN model via simulations for an example task of modulation classification.
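A minimal sketch of the two steps described above, with the over-the-air step replaced by an ordinary digital convolution and a randomly generated CIR codebook standing in for the RIS-realizable responses:

```python
import numpy as np

def nearest_cir(weights, cir_codebook):
    """Step 1: quantize a desired convolution kernel to the nearest realizable
    channel impulse response (CIR) from a finite codebook."""
    dists = [np.linalg.norm(weights - c) for c in cir_codebook]
    return cir_codebook[int(np.argmin(dists))]

def air_convolution(signal, weights, cir_codebook):
    """Step 2: emulate one convolution 'over the air'. The kernel is realized as
    an FIR filter (the engineered CIR), and the received signal is the transmitted
    signal convolved with that CIR."""
    cir = nearest_cir(np.asarray(weights), cir_codebook)
    return np.convolve(signal, cir, mode="same")

rng = np.random.default_rng(0)
codebook = [rng.normal(size=3) for _ in range(64)]     # hypothetical realizable CIRs
out = air_convolution(rng.normal(size=32), [0.5, 1.0, 0.5], codebook)
```

The codebook quantization is what introduces the weight-approximation error that the end-to-end CNN accuracy evaluation has to absorb.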
Transformers have shown dominant performance across a range of domains, including language and vision. However, their computational cost grows quadratically with the sequence length, making their usage prohibitive for resource-constrained applications. To counter this, our approach is to divide the whole sequence into segments and apply attention to the individual segments. We propose a segmented recurrent transformer (SRformer) that combines segmented (local) attention with recurrent attention. The loss caused by reducing the attention window length is compensated by aggregating information across segments with recurrent attention. SRformer leverages the inherent memory of Recurrent Accumulate-and-Fire (RAF) neurons to update the cumulative product of keys and values. The segmented attention and lightweight RAF neurons ensure the efficiency of the proposed transformer. Such an approach leads to models with sequential processing capability at a lower computation and memory cost. We apply the proposed method to T5 and BART transformers. The modified models are tested on summarization datasets including CNN-DailyMail, XSUM, ArXiv, and MediaSUM. Notably, using segmented inputs of varied sizes, the proposed model achieves 6-22% higher ROUGE-1 scores than a segmented transformer and outperforms other recurrent transformer approaches. Furthermore, compared to full attention, the proposed model reduces the computational complexity of cross-attention by around 40%.
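A minimal sketch of combining local (segmented) attention with a recurrent key-value summary carried across segments; the simple additive mixing and the absence of RAF neuron dynamics are simplifications, not the SRformer architecture:

```python
import numpy as np

def segmented_recurrent_attention(q, k, v, seg_len):
    """Softmax attention inside each segment plus a recurrent term that carries a
    running key-value summary (sum of k^T v) from earlier segments, in the spirit
    of linear attention. q, k, v are (T, d) arrays."""
    T, d = q.shape
    kv_state = np.zeros((d, d))                      # recurrent key-value memory
    outputs = []
    for s in range(0, T, seg_len):
        qs, ks, vs = q[s:s+seg_len], k[s:s+seg_len], v[s:s+seg_len]
        scores = qs @ ks.T / np.sqrt(d)              # local (segmented) attention
        local = np.exp(scores - scores.max(-1, keepdims=True))
        local = (local / local.sum(-1, keepdims=True)) @ vs
        recurrent = qs @ kv_state                    # contribution of past segments
        outputs.append(local + recurrent)
        kv_state += ks.T @ vs                        # accumulate keys*values for later segments
    return np.vstack(outputs)
```

Each segment costs O(seg_len^2) instead of O(T^2), while the accumulated kv_state is what lets information flow across segment boundaries.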