Ever since the deployment of the first generation of mobile telecommunications, wireless communication technology has evolved at a dramatic pace over the past four decades. The fifth generation (5G) holds great promise in providing ultra-fast data rates, very low latency, and significantly improved spectral efficiency by exploiting the millimeter-wave spectrum for the first time in mobile communication infrastructure. In the years beyond 2030, newly emerging data-hungry applications and greatly expanded wireless networks will call for sixth-generation (6G) communication, a significant upgrade over the 5G network that covers almost the entire surface of the Earth and near outer space. In both 5G and future 6G networks, millimeter-wave technologies will play an important role in accomplishing the envisioned network performance and communication tasks. In this paper, the relevant millimeter-wave enabling technologies are reviewed, including recent developments in the system architectures of active beamforming arrays, beamforming integrated circuits, antennas for base stations and user terminals, system measurement and calibration, and channel characterization. The requirements each part must meet for future 6G communications are also briefly discussed.
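The active beamforming arrays mentioned above steer a beam by applying a progressive phase shift across the array elements. A minimal sketch of that idea (not from the paper; the uniform-linear-array geometry and all parameter names are illustrative) computes the normalized array factor of a phased array:

```python
import cmath
import math

def array_factor(n_elems, spacing_wl, steer_deg, theta_deg):
    """Normalized gain of an n-element uniform linear array whose
    phase weights steer the beam toward steer_deg (element spacing
    given in wavelengths)."""
    k = 2 * math.pi  # wavenumber times wavelength
    total = 0j
    for m in range(n_elems):
        # Geometric phase of element m toward observation angle theta.
        phase = k * spacing_wl * m * math.sin(math.radians(theta_deg))
        # Conjugate steering weight aligns all elements at steer_deg.
        weight = cmath.exp(-1j * k * spacing_wl * m * math.sin(math.radians(steer_deg)))
        total += weight * cmath.exp(1j * phase)
    return abs(total) / n_elems  # equals 1.0 exactly at the steered angle

# An 8-element half-wavelength array steered to 30 degrees:
print(round(array_factor(8, 0.5, 30.0, 30.0), 3))  # → 1.0
```

At the steered angle every element adds in phase, so the normalized gain reaches its maximum of 1; away from it the terms partially cancel, which is what shapes the beam.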
Currently, within the realm of deep learning-based spatiotemporal fusion algorithms, those that employ solely convolutional operations cannot efficiently extract global image information. In addition, fusion networks that combine convolution and transformers neglect the 2-D structure of remote sensing images and the role of their channels during training, resulting in increased computational cost. Existing complex fusion methods introduce noise and disregard the correlation between the low-resolution image's time-varying features and the high-resolution image's spatial features. To address these issues, we first propose TFNet, a temporal feature extraction network that combines normal and depthwise convolutions to better extract temporal features while reducing computational cost. Second, we suggest replacing the transformer with a convolution-based large-kernel attention module (LAM), which facilitates adjustment in both the spatial and channel dimensions while preserving the image structure. Furthermore, for improved image fusion, we propose a two-stage fusion module that merges feature images of various scales. This fusion module integrates features of varying scales and resolutions from various perspectives, thereby preventing noise inclusion and producing favorable fusion outcomes. In addition, we advocate for the application of spatiotemporal fusion techniques to other satellites by introducing a new dataset, SW, based on satellite images from Gaofen-1 and the moderate-resolution imaging spectroradiometer.
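The large-kernel attention idea above can be illustrated with a minimal 1-D sketch (this is not the paper's LAM implementation; the kernel, gate, and function names are hypothetical): a wide depthwise convolution produces an attention map that re-weights the input elementwise.

```python
import math

def large_kernel_attention_1d(x, kernel):
    """Minimal 1-D sketch of large-kernel convolutional attention:
    a wide convolution over the signal yields a sigmoid-gated
    attention map that rescales the input elementwise."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + x + [0.0] * pad  # zero-pad to keep the length
    out = []
    for i in range(len(x)):
        s = sum(kernel[j] * padded[i + j] for j in range(k))
        gate = 1.0 / (1.0 + math.exp(-s))   # sigmoid attention weight
        out.append(x[i] * gate)
    return out

signal = [0.0, 1.0, 2.0, 1.0, 0.0]
print(large_kernel_attention_1d(signal, [0.2] * 7))
```

Because the kernel spans most of the signal, each output position is modulated by a wide context window, which is how a large kernel approximates the global receptive field of self-attention at convolutional cost.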
Due to the limitations of current technology and budget, as well as the influence of various factors, obtaining remote sensing images with simultaneously high-temporal and high-spatial (HTHS) resolution is a major challenge. In this paper, we propose a GAN spatiotemporal fusion model based on multiscale features and the convolutional block attention module (CBAM) for remote sensing images (MCBAM-GAN) to produce high-quality HTHS fusion images. The model is divided into three stages: multi-level feature extraction, multi-feature fusion, and multi-scale reconstruction. First of all, we use the U-NET structure in the generator to deal with the significant differences in image resolution while avoiding the reduction in resolution due to the limitation of GPU memory. Second, a flexible CBAM module is added to adaptively rescale the spatial and channel features without increasing the computational cost, to enhance the salient areas and extract more detailed features. Considering that features of different scales play an essential role in the fusion, the multiscale idea is used to extract features of different scales in different scenes, which are finally used in the multi-loss reconstruction stage. Finally, to check the validity of the MCBAM-GAN model, we test it on the LGC and CIA datasets and compare it with classical spatiotemporal fusion algorithms. The results show that the proposed model performs well.
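The CBAM channel-attention step described above can be sketched as follows (a simplification, not the paper's code: the shared MLP is collapsed into an identity transform for brevity, and all names are illustrative). Each channel is average- and max-pooled, the pooled statistics are combined through a sigmoid, and the resulting weight rescales that channel:

```python
import math

def cbam_channel_attention(feat):
    """Sketch of CBAM-style channel attention on a C x H x W feature
    map stored as nested lists: pool each channel (avg and max),
    combine through a sigmoid, and rescale the channel."""
    weights = []
    for ch in feat:
        flat = [v for row in ch for v in row]
        avg_pool = sum(flat) / len(flat)
        max_pool = max(flat)
        # The shared MLP of real CBAM is omitted here for brevity.
        w = 1.0 / (1.0 + math.exp(-(avg_pool + max_pool)))
        weights.append(w)
    return [[[v * w for v in row] for row in ch]
            for ch, w in zip(feat, weights)]

feat = [[[1.0, 2.0], [3.0, 4.0]],      # a strongly activated channel
        [[-1.0, -2.0], [-3.0, -4.0]]]  # a weakly activated channel
print(cbam_channel_attention(feat))
```

The sigmoid keeps every channel weight in (0, 1), so attention suppresses uninformative channels without increasing feature magnitudes, which matches the "adaptively rescale without extra cost" role described in the abstract.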
With the rapid development of artificial intelligence and Internet of Things (IoT) technology, an increasing number of edge devices have entered people's daily lives. However, due to the limited performance of edge devices, complex models can affect the response speed and efficiency of the whole system. Existing research still cannot simultaneously satisfy the demands for accuracy and response speed on edge devices. This paper proposes a lightweight and highly accurate object detection model that uses the Transformer to address edge devices' limited computational capacity and storage space. Specifically, the proposed model adopts the Swin Transformer for multi-scale feature extraction to achieve better global modeling capability. In addition, we propose a Neck module with a path aggregation network (PAN), designed as a two-feature-pyramid structure that combines semantic and localization information to improve operational performance by exploiting the underlying location features. A lightweight detection head is then developed using group convolution, fusing the two localization branches and removing the additional decoupling operation. Finally, we conduct comparative experiments on three datasets: the Retail-cabinet dataset, the Roadsign dataset, and the Pascal VOC dataset. Experimental results show that, compared with the baseline model, our model achieves an 11.8% improvement in mAP on the Retail-cabinet dataset while reducing Params and FLOPs by 23.19% and 71.50%, respectively. The proposed model effectively reduces the model's computational complexity and improves detection performance, thereby possessing high practical value. The code is released at https://github.com/ydlam/ST-YOLOX.
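The Params/FLOPs savings from the group convolution in the detection head follow from a simple count (the channel sizes below are illustrative, not taken from the paper): a grouped convolution splits both input and output channels across groups, so its weight count shrinks by the group factor.

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a k x k convolution layer (bias ignored).
    A grouped convolution divides the input and output channels
    across `groups` independent convolutions."""
    assert c_in % groups == 0 and c_out % groups == 0
    return groups * (c_in // groups) * (c_out // groups) * k * k

std = conv_params(256, 256, 3)            # standard 3x3 conv
grp = conv_params(256, 256, 3, groups=4)  # same shape, 4 groups
print(std, grp, 1 - grp / std)  # → 589824 147456 0.75
```

With four groups the layer keeps the same input/output shape but uses a quarter of the weights (and proportionally fewer FLOPs), which is the mechanism behind lightweight heads of this kind.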
Due to the limitations of current technology and budget, a single satellite sensor cannot obtain remote sensing images with high spatiotemporal resolution. Therefore, remote sensing image spatio-temporal fusion technology is considered an effective solution and has attracted extensive attention. In the field of deep learning, because the receptive field of a convolutional neural network is of fixed size, it is impossible to model the correlation of global features, and features extracted only through convolution operations lack the ability to capture long-distance dependencies. At the same time, complex fusion methods cannot adequately integrate temporal and spatial features. To solve these problems, we propose MSFusion, a multistage remote sensing image spatio-temporal fusion model based on a texture transformer and a convolutional neural network. The model combines the advantages of the transformer and the convolutional network: it uses a lightweight convolutional network to extract spatial features and temporal discrepancy features, uses the transformer to learn global temporal correlation, and finally fuses the temporal features with the spatial features. To make full use of the features obtained at different stages, we design a cross-stage adaptive fusion module, CSAFM. The module adopts the self-attention mechanism to adaptively integrate features of different scales while considering the temporal and spatial characteristics. To test the robustness of the model, experiments are carried out on three datasets: CIA, LGC, and DX. Compared with five typical spatio-temporal fusion algorithms, we obtain excellent results, which demonstrate the superiority of the MSFusion model.
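The self-attention mechanism that such a cross-stage fusion module relies on can be sketched in a few lines (a generic scaled dot-product attention, not the paper's CSAFM; all names are illustrative). Features from one stage act as queries, features from another stage as keys and values, and the softmax weights decide how much each stage contributes:

```python
import math

def attention_fuse(queries, keys, values):
    """Scaled dot-product attention over feature vectors (lists of
    floats): each query is fused into a softmax-weighted combination
    of the value vectors."""
    d = len(keys[0])
    fused = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        m = max(scores)                        # stabilize the softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        w = [e / z for e in exps]              # attention weights, sum to 1
        fused.append([sum(wj * v[i] for wj, v in zip(w, values))
                      for i in range(len(values[0]))])
    return fused

print(attention_fuse([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]],
                     [[1.0, 0.0], [0.0, 1.0]]))
```

Because the weights are a softmax, the fused vector is always a convex combination of the stage features: the module can lean toward whichever scale is most informative without ever amplifying any input.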
Land cover classification has been of great interest as one of the most prominent applications of remote sensing images. The emergence of convolutional neural networks has largely promoted the development of land cover classification, but they ignore the positional relationships between pixels. When remotely sensed features exhibit both large intraclass scale differences and interclass similarities, the results suffer from fuzzy class boundaries and misclassification of small samples, problems that are difficult for existing methods to solve. Inspired by the recent Transformer network, we propose a self-attentive bilateral network, SABNet, to alleviate these problems. Its backbone consists of a modified multiscale vision transformer and stacked convolutional layers for extracting global spatial information and local contextual information. A local embedding module and a coordinate attention fusion module are further proposed in the feature fusion stage to reduce attention distraction and efficiently fuse the high- and low-level features. A stepwise feature fusion module is proposed in the decoder to fully fuse the features extracted from the two branches. Experiments show that, with a similar number of parameters, our method achieves the best mIoU among existing methods on both the Landcover.ai and GID-15 datasets: 91.49% on Landcover.ai and 64.23% on GID-15.
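The mIoU figures quoted above are computed per class and then averaged; a minimal sketch of the metric (flat label lists and class indices are illustrative simplifications of the usual per-pixel segmentation masks):

```python
def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union for segmentation labels given as
    flat lists with classes 0..num_classes-1. Classes absent from
    both prediction and target are skipped."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Class 0: IoU 1/2; class 1: IoU 2/3; mean = 7/12.
print(mean_iou([0, 0, 1, 1], [0, 1, 1, 1], 2))  # → 0.5833...
```

Averaging over classes rather than pixels is what makes mIoU sensitive to small or rare classes, the very cases the abstract says existing methods misclassify.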
In this letter, we propose an approach to constructing a multiple-input-multiple-output (MIMO) channel in a line-of-sight (LOS) environment. It could be a potential technique for high-speed data transmission in fixed broadband wireless access (BWA). Based on the Ricean fading channel model, our research focuses on characteristics of the channel matrix such as its condition number. The initial discussion starts from a 4 × 4 MIMO scheme in which four antennas form a square array on each side. A design constraint on the antenna arrangement, as a function of frequency and distance, is derived for LOS MIMO communication. Since the rigorous constraint is difficult to satisfy in practice, simulation results on the acceptable deviation from the optimal design in different cases are presented. A smart geometrical arrangement and multipolarization may also relax this constraint.
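The kind of geometric design constraint discussed above can be illustrated numerically (a sketch only: it uses two parallel uniform linear arrays rather than the letter's square arrays, and the frequency, range, and names are assumptions). For broadside linear arrays, the classical orthogonality condition is spacing² = λR/N; at that spacing the LOS channel columns become nearly orthogonal and the condition number approaches 1:

```python
import cmath
import math

def los_channel(n, wavelength, distance, spacing):
    """Pure-LOS MIMO channel between two parallel n-element linear
    arrays separated by `distance`; entry (rx, tx) is the unit-gain
    phase term exp(-j 2*pi*r/lambda) for the exact path length r."""
    h = []
    for rx in range(n):
        row = []
        for tx in range(n):
            r = math.sqrt(distance ** 2 + (rx * spacing - tx * spacing) ** 2)
            row.append(cmath.exp(-2j * math.pi * r / wavelength))
        h.append(row)
    return h

# Illustrative 60 GHz link over 100 m with a 4x4 scheme.
lam, R, n = 0.005, 100.0, 4
d = math.sqrt(lam * R / n)       # classical optimal spacing rule
H = los_channel(n, lam, R, d)

# Check orthogonality of two channel columns: H^H H should be ~ n*I,
# i.e. the condition number approaches 1 and all n streams are usable.
g01 = sum(H[k][0].conjugate() * H[k][1] for k in range(n))
print(abs(g01) / n)  # close to 0
```

Perturbing the spacing away from d makes the off-diagonal Gram terms grow and the condition number deteriorate, which is exactly the sensitivity the letter quantifies through its simulated deviation tolerances.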
Interneurons are involved in the physiological function and the pathomechanism of the spinal cord. The present study aimed to examine and compare the morphology and distribution of Cr+, Calb+, and Parv+ interneurons using immunohistochemical and Western blot techniques. The results showed that: 1) Cr-Calb showed a higher coexistence rate than Cr-Parv, and both rates were higher in the ventral horn than in the dorsal horn; 2) the Cr+, Calb+, and Parv+ neurons distributed zonally in the superficial dorsal horn were small; among them, Parv+ neurons were the largest, and Cr+ and Calb+ neurons occurred at higher density. In the deep dorsal horn, Parv+ neurons were mainly located in the nucleus thoracicus, with the remainder scattered; Cr+ neurons were the largest in size, and Calb+ neurons were the fewest of the three interneuron types; 3) Cr+, Calb+, and Parv+ neurons of the ventral horn displayed polygonal, round, and fusiform shapes, respectively; Cr+ and Parv+ neurons were mainly distributed in the deep layer, whereas Calb+ neurons were mainly in the superficial layer. Cr+ neurons were the largest and were distributed more in the ventral horn than in the dorsal horn; 4) in the dorsal horn of the lumbar cord, the Calb protein level was the highest, whereas in the ventral horn the Parv protein level was the highest of the three proteins. These results suggest that the morphological characteristics of the three interneuron types are relevant to their physiological functions and pathomechanisms.