Superpixel-based image segmentation is used in urban and land-cover change detection to rapidly locate regions of interest. However, segmentation algorithms often degrade due to speckle noise in synthetic aperture radar images. In this paper, a feature learning method using a stacked contractive autoencoder (sCAE) is presented to extract temporal change features from superpixels while suppressing noise. First, an affiliated temporal change image, which captures temporal differences at the pixel level, is built from three different metrics. Second, the simple linear iterative clustering algorithm is used to generate superpixels that tightly adhere to the change-image boundaries, so as to acquire homogeneous change samples. Third, an sCAE network is trained with the superpixel samples as input to learn semantic change features. The features encoded by this sCAE model are then classified into changed and unchanged classes to create the change map. Finally, the proposed method is compared with methods based on principal component analysis and Markov random fields. Experimental results show that our deep learning model can efficiently separate nonlinear noise from change features and achieves better change detection performance on synthetic aperture radar images than conventional algorithms.
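The abstract does not enumerate its three pixel-level metrics; as a hedged illustration, the log-ratio operator below is a standard pixel-level change metric for SAR imagery (the image layout and values are placeholders, not the paper's data):

```python
import math

def log_ratio(img_t1, img_t2, eps=1e-6):
    # Pixel-wise log-ratio change metric: the logarithm turns the
    # multiplicative speckle of SAR imagery into additive noise, so
    # large values indicate genuine temporal change.
    return [[abs(math.log((b + eps) / (a + eps)))
             for a, b in zip(row1, row2)]
            for row1, row2 in zip(img_t1, img_t2)]
```

A change image produced this way would then be segmented into superpixels before feature learning.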
Identification and resolution technology is a prerequisite for realizing the identity consistency of physical-cyber space mapping in the Internet of Things (IoT). The face, as a distinctive noncoded and unstructured identifier, has particular advantages in identification applications. With the growth of applications based on face identification, the requirements for computation, communication, and storage capability keep increasing. To solve this problem, we propose a fog-computing-based face identification and resolution scheme. A face identifier is first generated by the identification system model to identify an individual. Then, a fog-computing-based resolution framework is proposed to efficiently resolve the individual's identity. Some computing overhead is offloaded from the cloud to network edge devices to improve processing efficiency and reduce network transmission. Finally, a prototype system based on the local binary patterns (LBP) identifier is implemented to evaluate the scheme. Experimental results show that this scheme can effectively save bandwidth and improve the efficiency of face identification and resolution.
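The LBP identifier mentioned above builds on the basic local binary pattern operator; a minimal per-pixel sketch is given below (the 8-neighbour clockwise bit ordering is one common convention and the tiny image is illustrative, not the paper's exact variant):

```python
def lbp_code(img, r, c):
    # 8-neighbour local binary pattern: threshold each neighbour against
    # the centre pixel and pack the resulting bits into one byte.
    center = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code
```

A face identifier would be built from histograms of such codes over image regions.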
Due to the recent proliferation of cyber-attacks, improving the robustness of wireless sensor networks (WSNs) so that they can withstand node failures has become a critical issue. Scale-free WSNs are important because they tolerate random attacks very well; however, they can be vulnerable to malicious attacks that target certain important nodes. To address this shortcoming, this paper first presents a new modeling strategy to generate scale-free network topologies that considers WSN constraints such as the communication range and the threshold on the maximum node degree. Then, ROSE, a novel robustness-enhancing algorithm for scale-free WSNs, is proposed. Given a scale-free topology, ROSE exploits the position and degree information of nodes to rearrange the edges into an onion-like structure, which has been proven to be robust against malicious attacks. Meanwhile, ROSE keeps the degree of each node unchanged, so the resulting topology remains scale-free. Extensive experimental results verify that our new modeling strategy indeed generates scale-free network topologies for WSNs, and that ROSE significantly improves the robustness of the topologies generated by our modeling strategy. Moreover, we compare ROSE with two existing robustness-enhancing algorithms and show that ROSE outperforms both.
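The core primitive behind degree-preserving rewiring can be sketched as a double-edge swap (ROSE's position- and degree-based guidance heuristic is not reproduced here; the graph and retry limit are illustrative):

```python
import random
from collections import Counter

def degrees(edges):
    # Degree of every node in an undirected edge set.
    return Counter(v for e in edges for v in e)

def degree_preserving_swap(edges, rng):
    """Rewire two edges (a,b),(c,d) into (a,c),(b,d).

    Every node keeps its degree, so a scale-free degree distribution is
    preserved while the wiring pattern changes.
    """
    pool = sorted(edges, key=sorted)  # deterministic order for sampling
    for _ in range(1000):
        e1, e2 = rng.sample(pool, 2)
        a, b = sorted(e1)
        c, d = sorted(e2)
        if len({a, b, c, d}) < 4:
            continue  # edges share a node: swap would create a self-loop
        n1, n2 = frozenset((a, c)), frozenset((b, d))
        if n1 in edges or n2 in edges:
            continue  # swap would duplicate an existing edge
        return (edges - {e1, e2}) | {n1, n2}
    return edges  # no valid swap found
```

Repeating such swaps, accepted only when they move the topology toward an onion-like structure, is the general shape of robustness-enhancing rewiring.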
Chaotic time series are widespread in nature and society (e.g., meteorology, physics, and economics) and usually exhibit seemingly unpredictable behavior due to their inherent nonstationarity and high complexity. Many advanced approaches have been developed to tackle the prediction problem, such as statistical methods, artificial neural networks (ANNs), and support vector machines. Among them, the interval type-2 fuzzy neural network (IT2FNN), a synergistic integration of fuzzy logic systems and ANNs, has received wide attention in the field of chaotic time series prediction. This paper begins with the structural features and advantages of the IT2FNN. It then presents the identification of chaotic characteristics and phase-space reconstruction, both of which matter for prediction. In addition, we offer a comprehensive review of state-of-the-art applications of the IT2FNN, with an emphasis on chaotic time series prediction, and summarize their main contributions as well as some hardware implementations for computation speedup. Finally, trends and extensions of this field are discussed, along with an outlook on future challenges. The primary objective of this paper is to serve as a tutorial and reference for interested researchers, helping them form an overall picture of current developments and identify potential directions for further investigation.
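Phase-space reconstruction, which the survey identifies as a prerequisite for prediction, is typically done via Takens delay embedding; a minimal sketch follows (the embedding dimension and delay are illustrative parameters, in practice chosen with heuristics such as false nearest neighbours and mutual information):

```python
def delay_embed(series, dim, tau):
    # Takens delay embedding: each scalar sample x_t becomes the vector
    # (x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau}); these vectors form the
    # reconstructed phase space fed to a predictor such as an IT2FNN.
    n = len(series) - (dim - 1) * tau
    return [[series[t + j * tau] for j in range(dim)] for t in range(n)]
```

Each embedded vector serves as one input pattern, with a future sample as the prediction target.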
The rapid evolution of the Internet of Medical Things (IoMT) promotes pervasive in-home health monitoring networks. However, the excessive requirements of patients result in insufficient spectrum resources and communication overload. Mobile Edge Computing (MEC) enabled 5G health monitoring is conceived as a favorable paradigm to tackle this obstacle. In this paper, we construct a cost-efficient in-home health monitoring system for IoMT by dividing it into two sub-networks, i.e., intra-Wireless Body Area Networks (WBANs) and beyond-WBANs. Reflecting the characteristics of IoMT, the cost of a patient depends on medical criticality, Age of Information (AoI), and energy consumption. For intra-WBANs, a cooperative game is formulated to allocate the wireless channel resources. For beyond-WBANs, considering the individual rationality and potential selfishness of patients, a decentralized non-cooperative game is proposed to minimize the system-wide cost in IoMT. We prove that the proposed algorithm reaches a Nash equilibrium. In addition, the upper bounds of the algorithm's time complexity and of the number of patients benefiting from MEC are theoretically derived. Performance evaluations demonstrate the effectiveness of our proposed algorithm with respect to the system-wide cost and the number of patients benefiting from MEC.
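The paper's cost model and equilibrium proof are not reproduced here; as a hedged toy illustration of how a decentralized non-cooperative offloading game can settle at a Nash equilibrium, the sketch below runs sequential best responses in a binary local-vs-MEC choice with a congestion-priced edge (all cost values are invented placeholders):

```python
def best_response_dynamics(local_cost, edge_base, edge_congestion, n, max_iter=100):
    """Toy best-response dynamics for a binary offloading game.

    Each of n patients picks local processing (fixed cost local_cost[i])
    or MEC offloading (cost edge_base + edge_congestion * k, where k is
    the number of offloaders). For this congestion-style toy instance,
    sequential best responses settle at a pure Nash equilibrium.
    """
    choice = [0] * n  # 0 = process locally, 1 = offload to MEC
    for _ in range(max_iter):
        changed = False
        for i in range(n):
            k_others = sum(choice) - choice[i]
            offload_cost = edge_base + edge_congestion * (k_others + 1)
            best = 1 if offload_cost < local_cost[i] else 0
            if best != choice[i]:
                choice[i] = best
                changed = True
        if not changed:
            return choice  # no unilateral improvement exists: equilibrium
    return choice
```

Patients with high local cost offload until edge congestion makes offloading unattractive, mirroring the trade-off the abstract describes.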
The TCP/IP stack plays an important role in data transmission, traffic control, and address assignment in the Internet of Vehicles (IoV), which has seen phenomenal growth in recent years. However, with increasing technical requirements and daily demands in IoV, the drawbacks of traditional TCP/IP protocols, e.g., weak scalability in large networks, low efficiency in dense environments, and unreliable addressing under high mobility, become non-trivial, especially in vehicular environments. Fortunately, the emerging Named Data Networking (NDN) technology offers a promising way to address these issues. Specifically, the content store module introduced in NDN, which caches sent and received contents, can greatly improve networking performance by suppressing redundancy and enriching diversity. With these motivations, we thoroughly survey previous works on Vehicular Named Data Networking (VNDN) with an emphasis on content caching, and demonstrate the feasibility and necessity of employing NDN in vehicular environments with the help of content caching. Subsequently, we underline the importance of cache selection and replacement strategies in the VNDN framework, which are positioned to meet the challenges of data-transmission efficiency and resource consumption by leveraging in-network vehicular caching. We then provide an in-depth survey of existing cache selection and replacement schemes in VNDN and compare their applicability. Next, further challenges in caching design are analyzed in view of the specific characteristics of VNDN. Finally, we highlight potential research directions that may guide promising efforts to improve the performance of VNDN content caching.
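As a hedged point of reference for the replacement schemes surveyed, the sketch below implements least-recently-used (LRU) eviction, a common baseline content-store policy in NDN caching studies (the capacity and content names are illustrative, not drawn from any particular VNDN scheme):

```python
from collections import OrderedDict

class LRUContentStore:
    """Least-recently-used content store, a baseline replacement policy."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, name):
        if name not in self.store:
            return None  # cache miss: the Interest is forwarded upstream
        self.store.move_to_end(name)  # refresh recency on a hit
        return self.store[name]

    def put(self, name, content):
        if name in self.store:
            self.store.move_to_end(name)
        self.store[name] = content
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used
```

VNDN-specific schemes refine such baselines with vehicular context such as mobility, popularity, and neighbor caches.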
Due to random delays, local maxima, and data congestion in vehicular networks, designing a routing protocol is a challenging task, especially in urban environments. In this paper, a distributed routing protocol, DGGR, is proposed, which takes both sparse and dense environments into account when making routing decisions. To guide route selection, a road weight evaluation (RWE) algorithm is presented to assess road segments; its novelty lies in assigning each road segment a weight based on two delay models, exploiting real-time link properties when the segment is connected and historical traffic information when it is disconnected. With the RWE algorithm, the chosen routing path greatly alleviates the risk of local maxima and data congestion. In view of the large size of a modern city, the road map is divided into a series of Grid Zones (GZs). Based on the position of the destination, packets are forwarded among different GZs rather than across the whole city map, reducing computational complexity; within each GZ, the path with the lowest delay is determined. A backbone link, consisting of selected backbone nodes at intersections and within road segments, is built for data forwarding along the determined path, further avoiding MAC contention. Extensive simulations reveal that, compared with several classic routing protocols, DGGR performs best in terms of average transmission delay and packet delivery ratio under varying packet generation rates and vehicle densities.
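Once each road segment carries a delay weight, finding the lowest-delay path within a grid zone is a shortest-path computation; the Dijkstra sketch below illustrates this step (the graph, node names, and weights are illustrative, and the RWE weighting model itself is not reproduced):

```python
import heapq

def lowest_delay_path(graph, src, dst):
    """Dijkstra over a road-segment graph with delay-estimate weights.

    `graph` maps an intersection to a list of (neighbor, delay) pairs.
    Returns (path, total_delay), or (None, inf) if dst is unreachable.
    """
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    seen = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None, float('inf')
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]
```

Restricting the computation to one grid zone at a time is what keeps this step cheap in a city-scale map.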
Detecting elliptical objects in an image is a central task in robot navigation and industrial diagnosis, where detection time is often critical. Existing methods are hardly applicable to such real-time scenarios on limited hardware due to the huge number of fragment candidates (edges or arcs) for fitting ellipse equations. In this paper, we present a fast ellipse detection algorithm with high accuracy. The algorithm leverages a newly developed projective invariant to significantly prune undesired candidates and pick out elliptical ones. The invariant reflects the intrinsic geometry of a planar curve, taking the value -1 on any three collinear points and +1 for any six points on an ellipse; pruning and picking thus reduce to comparing these binary values. Moreover, calculating the invariant involves only the determinant of a 3×3 matrix. Extensive experiments on three challenging data sets with 648 images demonstrate that our detector runs 20%-50% faster than state-of-the-art algorithms with comparable or higher precision.
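The paper's specific invariant is not reproduced here; as a hedged illustration of the classical determinant facts such pruning rests on, three points are collinear iff a 3×3 homogeneous determinant vanishes, and six points lie on a common conic iff a 6×6 monomial determinant vanishes:

```python
def det(m):
    # Determinant by Gaussian elimination with partial pivoting.
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-12:
            return 0.0  # (numerically) singular
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def collinear(p, q, r, tol=1e-6):
    # Three points are collinear iff this 3x3 determinant vanishes.
    return abs(det([[p[0], p[1], 1.0],
                    [q[0], q[1], 1.0],
                    [r[0], r[1], 1.0]])) < tol

def on_common_conic(pts, tol=1e-6):
    # Six points lie on a common conic (e.g., an ellipse) iff the 6x6
    # determinant of monomial rows [x^2, xy, y^2, x, y, 1] vanishes.
    rows = [[x * x, x * y, y * y, x, y, 1.0] for x, y in pts]
    return abs(det(rows)) < tol
```

Binary tests of this kind are what let a detector discard non-elliptical fragment groups before any expensive ellipse fitting.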
Energy-efficient and robust time synchronization is crucial for the industrial Internet of Things (IIoT). Several energy-efficient time synchronization schemes achieving high accuracy have been proposed recently. However, these schemes leave some nodes, called isolated nodes, unsynchronized. To deal with this problem, this paper presents R-Sync, a robust time synchronization scheme for IIoT. We use a pulling timer, whose initial value is set according to the node's level in the spanning tree, to pull isolated nodes into the synchronized network. Another timer, whose initial value is related to the distance to the parent node, is set up to select backbone nodes. Moreover, we conduct experiments both in the NS-2 simulator and on a testbed of wireless hardware nodes. The experimental results show that our approach gets all nodes synchronized and achieves better accuracy and energy consumption than three existing time synchronization algorithms: TPSN, GPA, and STETS.
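Tree-based schemes such as TPSN, against which R-Sync is compared, rest on the classical two-way message exchange, which computes clock offset and propagation delay from four timestamps; a minimal sketch (the timestamp values in the usage test are illustrative):

```python
def pairwise_sync(t1, t2, t3, t4):
    # Two-way exchange: the child sends at t1 (its clock), the parent
    # receives at t2 and replies at t3 (parent clock), and the child
    # receives the reply at t4. Assuming symmetric link delay:
    offset = ((t2 - t1) - (t4 - t3)) / 2  # parent clock minus child clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way propagation delay
    return offset, delay
```

The child then adds the estimated offset to its clock, and the exchange propagates level by level down the spanning tree.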
Face identification and resolution technology is crucial for ensuring the identity consistency of humans in physical space and cyberspace. In the current Internet of Things (IoT) and big data context, the growth of applications based on face identification and resolution raises the demands on computation, communication, and storage capabilities. Therefore, we have proposed a fog-computing-based face identification and resolution framework to improve processing capacity and save bandwidth. However, the properties of the fog-computing-based framework bring security and privacy issues. In this paper, we propose a security and privacy preservation scheme to solve these issues. We outline the fog-computing-based face identification and resolution framework and summarize its security and privacy issues. Then an authentication and session key agreement scheme, a data encryption scheme, and a data integrity checking scheme are proposed to address confidentiality, integrity, and availability in the processes of face identification and face resolution. Finally, we implement a prototype system to evaluate the influence of the security scheme on system performance. We also evaluate and analyze the security properties of the proposed scheme through logical formal proof and from the viewpoint of the confidentiality, integrity, and availability (CIA) properties of information security. The results indicate that the proposed scheme can effectively meet the requirements for security and privacy preservation.
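The abstract does not specify the primitives behind its integrity-checking scheme; a common building block for such a check is an HMAC tag over the transmitted data, sketched below (the key and message are placeholders, and HMAC-SHA256 is an assumed choice, not necessarily the paper's):

```python
import hashlib
import hmac

def integrity_tag(key: bytes, data: bytes) -> str:
    # HMAC-SHA256 tag computed by the sender over the transmitted data.
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def check_integrity(key: bytes, data: bytes, tag: str) -> bool:
    # Constant-time comparison on the receiver side to detect tampering.
    return hmac.compare_digest(integrity_tag(key, data), tag)
```

In a fog setting, the shared key would come from the session key agreement step, and any modification of the data in transit invalidates the tag.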