In recent times, the Internet of Things (IoT) has become a hot research topic; it aims at interlinking several sensor-enabled devices, mainly for data-gathering and tracking applications. Wireless Sensor Networks (WSNs) have been an important component of the IoT paradigm since its inception and have become the preferred platform for several smart-city application areas such as home automation, smart buildings, intelligent transportation, disaster management, and other IoT-based applications. Clustering methods are widely employed energy-efficiency techniques whose primary purpose is to balance the energy load among sensor nodes. Clustering and routing are NP-hard problems, and bio-inspired techniques have long been employed to solve such problems. The current research paper designs an Energy Efficient Two-Tier Clustering with Multi-hop Routing Protocol (EETTC-MRP) for IoT networks. The presented EETTC-MRP technique operates in different stages, namely tentative Cluster Head (CH) selection, final CH selection, and routing. In the first stage of the proposed EETTC-MRP technique, a type-II fuzzy logic-based tentative CH (T2FL-TCH) selection is used. Subsequently, a Quantum Group Teaching Optimization Algorithm-based Final CH selection (QGTOA-FCH) technique is deployed to derive an optimal group of CHs in the network. Besides, a Political Optimizer-based Multihop Routing (PO-MHR) technique is employed to derive an optimal selection of routes between CHs in the network. To validate the efficacy of the EETTC-MRP method, a series of experiments was conducted and the outcomes were examined under distinct measures. The experimental analysis infers that the proposed EETTC-MRP technique is superior to other methods under different measures.
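The energy-balancing idea behind cluster-head selection can be illustrated with a minimal sketch. The paper's actual method uses type-II fuzzy logic and QGTOA; the weighted score, the node fields, and the weight values below are illustrative assumptions, not the authors' formulation:

```python
import math

def ch_fitness(node, base_station, w_energy=0.7, w_dist=0.3):
    """Score a node for cluster-head duty: high residual energy,
    short distance to the base station (weights are assumed)."""
    dist = math.dist(node["pos"], base_station)
    return w_energy * node["energy"] - w_dist * dist

def select_tentative_chs(nodes, base_station, k):
    """Pick the k best-scoring nodes as tentative cluster heads."""
    ranked = sorted(nodes, key=lambda n: ch_fitness(n, base_station), reverse=True)
    return ranked[:k]

nodes = [
    {"id": 0, "pos": (10, 10), "energy": 0.9},
    {"id": 1, "pos": (40, 40), "energy": 0.5},
    {"id": 2, "pos": (15, 20), "energy": 0.8},
    {"id": 3, "pos": (70, 60), "energy": 0.95},
]
chs = select_tentative_chs(nodes, base_station=(0, 0), k=2)
print([n["id"] for n in chs])  # nodes 0 and 2: good energy, close to the sink
```

A final-CH stage, as in the paper, would then refine this tentative set with a metaheuristic search instead of a fixed ranking.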
In past decades, retinal diseases have become more common and affect people of all age groups across the globe. Examining retinal eye disease calls for an artificial intelligence (AI) based multi-label classification model for automated diagnosis; to analyze retinal disease, the system proposes a multi-class, multi-label classification method. Feature-based classification frameworks are explicitly described by ophthalmologists using domain knowledge, which tends to be time-consuming, prone to poor generalization, and infeasible on massive datasets. Therefore, automated diagnosis of multiple retinal diseases becomes essential, and it can be addressed by deep learning (DL) models. With this motivation, this paper presents an intelligent deep learning-based multi-retinal disease diagnosis (IDL-MRDD) framework using fundus images. The proposed model aims to classify color fundus images into different classes, namely AMD, DR, glaucoma, hypertensive retinopathy, normal, others, and pathological myopia. Besides, an artificial flora algorithm with Shannon's function (AFA-SF) based multi-level thresholding technique is employed for image segmentation so that infected regions can be properly detected. In addition, a SqueezeNet-based feature extractor is employed to generate a collection of feature vectors. Finally, the stacked sparse autoencoder (SSAE) model is applied as a classifier to distinguish the input images into distinct retinal diseases. The efficacy of the IDL-MRDD technique is evaluated on a benchmark multi-retinal disease dataset comprising data instances from different classes. The experimental values point out a superior outcome over existing techniques, with a maximum accuracy of 0.963.
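The entropy-based thresholding that the AFA-SF segmentation step builds on can be sketched for a single threshold in Kapur's style: pick the split of the gray-level histogram that maximizes the summed entropy of the two partitions. The toy 6-bin histogram is an assumption, and the paper's artificial flora search over multiple thresholds is replaced here by exhaustive search:

```python
import math

def shannon_entropy(probs):
    """Entropy of a (sub-)distribution, renormalized to sum to 1."""
    total = sum(probs)
    if total == 0:
        return 0.0
    return -sum(p / total * math.log2(p / total) for p in probs if p > 0)

def best_threshold(hist):
    """Kapur-style single threshold: maximize the summed entropy of
    the two histogram partitions (exhaustive search over split points)."""
    n = sum(hist)
    probs = [h / n for h in hist]
    best_t, best_score = 0, float("-inf")
    for t in range(1, len(hist)):
        score = shannon_entropy(probs[:t]) + shannon_entropy(probs[t:])
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# Toy 6-bin histogram with two modes (e.g. lesion vs background intensities).
hist = [8, 8, 0, 0, 4, 4]
t = best_threshold(hist)
print(t)
```

Multi-level thresholding repeats this idea with several split points, which is where a metaheuristic such as AFA becomes useful, since exhaustive search grows combinatorially.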
Online detection of arrhythmia in 12-lead electrocardiogram (ECG) signals by deep learning models is essential for clinical care. If an 8-byte floating-point data type is used to represent each sample of a 12-lead ECG signal, the resulting ResNet-18-class model occupies about 800.48 MB (100.06 M parameters × 8 B). Such a model is challenging to deploy on devices with minimal hardware. Consequently, these models are inadequate for practical purposes, and their use is restricted on low-capacity devices in emerging fields such as the Internet of Medical Things. This article introduces a technique for categorizing irregularities in 12-lead ECG signals on edge devices. The method utilizes a lightweight learning approach for the classification of arrhythmias. The evident originality of this work is the use of different evaluations to deploy the suggested model on a device with hardware limitations. A compact model was derived using the TensorFlow Lite platform and deployed on an Android device serving as the edge device. According to the assessment, the proposed classification model, designed to categorize 11 different irregularities in an ECG dataset of 10,646 patients, achieves an accuracy of 83.45%. Finally, the comparisons performed reveal that the proposed model exhibits competitive performance relative to alternative approaches that rely on standard deep learning models.
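The memory-footprint arithmetic behind the quoted figure can be checked with a short sketch. The parameter count is taken from the abstract; the 1-byte comparison is an illustrative assumption about int8 quantization of the kind TensorFlow Lite supports, not a measurement of the authors' model:

```python
def model_size_mb(num_params, bytes_per_param):
    """Approximate in-memory size of a model's weights in megabytes."""
    return num_params * bytes_per_param / 1e6

# ~100.06 M parameters stored as 8-byte floats, as in the abstract:
full = model_size_mb(100.06e6, 8)       # ~800.48 MB
# The same weights quantized to 1-byte integers (int8):
quantized = model_size_mb(100.06e6, 1)  # ~100.06 MB
print(round(full, 2), round(quantized, 2))
```

This eightfold reduction is the kind of saving that makes edge deployment on an Android device plausible, before any architectural slimming is applied.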
Concerns about fire-risk reduction and rescue tactics have been raised in light of recent incidents involving flammable cladding systems and fast fire spread in high-rise buildings worldwide. Thus, governments, engineers, and building designers should prioritize fire safety. During a fire event, an emergency evacuation system is indispensable in large buildings, guiding evacuees to exit gates as fast as possible along dynamic and safe routes. Evacuation plans should evaluate whether paths inside the structure are appropriate for evacuation, considering the building's electric power, electric controls, energy usage, and fire/smoke protection. Meanwhile, the Internet of Things (IoT) is emerging as a catalyst for creating and optimizing the supply and consumption of intelligent services to achieve an efficient system. Smart buildings use IoT sensors to monitor indoor environmental parameters such as temperature, humidity, luminosity, and air quality. This research proposes a new IoT-based smart-building fire evacuation and control system that efficiently directs individuals along an evacuation route during fire incidents. It utilizes a hybrid nature-inspired optimization approach combining Emperor Penguin Colony and Particle Swarm Optimization (EPC-PSO). The EPC algorithm is regulated by the penguins' body-heat radiation and spiral-like movement inside their colony; this behavior is used to speed up the convergence of PSO, while the particle concept of PSO in turn updates the penguins' positions. Experimental results showed that the proposed method addresses cost, energy-consumption, and execution-time challenges accurately and effectively, ensuring minimal casualties to life and resources. The method decreased execution time and cost by 10.41% and 25%, respectively, compared to other algorithms. Moreover, to achieve a sustainable system, it decreased energy consumption by 11.90% compared to other algorithms.
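The PSO update that EPC-PSO builds on can be sketched in its textbook form: each particle's velocity is pulled toward its personal best and the swarm's global best. The hyperparameters, test function, and swarm size below are assumptions, and the emperor-penguin heat-radiation and spiral-movement terms of the hybrid are omitted:

```python
import random

def pso_minimize(f, dim=2, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimization of f over [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso_minimize(lambda p: sum(x * x for x in p))
print(best_val)
```

In the hybrid, an evacuation-route cost function (distance, congestion, hazard exposure) would replace the sphere function used here for demonstration.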
Things receive digital intelligence by being connected to the Internet and equipped with sensors. Using real-time data and this intelligence, things can communicate with one another autonomously, and the environment around us becomes more intelligent and reactive, merging the digital and physical worlds thanks to the Internet of Things (IoT). In this paper, an optimal methodology is proposed for distinguishing outlier sensors in the IoT based on a developed design of the dragonfly optimization technique. Here, a modified structure of the dragonfly optimization algorithm is utilized for optimal area coverage and reduced energy consumption. Efficiency is evaluated using four parameters, including the minimum number of nodes in the coverage area, the network lifetime (measured as the interval from network start-up until the first node shuts down), and the network power. The results of the suggested method are compared with those of other published methods. The results show that as the number of steps increases, the energy of the live nodes eventually runs out and they turn off. Nodes start shutting down after 350 steps in the LEACH method, after 750 steps in the RED-LEACH method, and after 915 steps in the GSA-based method, whereas this occurs only after 1227 steps for the proposed method; that is, its nodes turn off later. Simulations indicate that the suggested method achieves better results than the other examined techniques according to the given performance parameters.
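The area-coverage criterion used to evaluate such deployments can be sketched as a simple membership test: a target point counts as covered when it lies within the sensing radius of at least one live node. The coordinates and radius below are illustrative assumptions:

```python
import math

def covered_targets(sensors, targets, radius):
    """Count targets within sensing radius of at least one live sensor."""
    return sum(
        any(math.dist(t, s) <= radius for s in sensors)
        for t in targets
    )

sensors = [(2, 2), (6, 6)]                     # live sensor positions
targets = [(1, 1), (3, 3), (9, 9), (5, 7)]     # points to be monitored
print(covered_targets(sensors, targets, radius=2.0))  # (9, 9) is uncovered
```

An optimizer such as the modified dragonfly algorithm would move the sensor positions to maximize this count while minimizing the number of active nodes and their energy use.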
The most suitable method for assessing bone age is to check the degree of maturation of the ossification centers in radiographs of the left wrist, so much effort has been made to help radiologists by providing reliable automated methods based on these images. This study designs and tests AlexNet, GoogLeNet, and a new architecture for assessing bone age. All methods are implemented fully automatically on the DHA (Digital Hand Atlas) dataset, which includes 1400 wrist images of healthy children aged 0 to 18 years from Asian, Hispanic, Black, and Caucasian races. The images are first segmented, and four different regions of each image are then separated. Bone age in each region is assessed by a separate network whose architecture is new and was obtained by trial and error. The final bone-age assessment is produced by an ensemble that averages the outputs of the four CNN models. In the results and model-evaluation section, various tests are performed, including tests of pre-trained networks, and the results of all tests confirm the better performance of the designed system compared to other methods. The proposed method achieves an accuracy of 83.4% and an average error rate of 0.1%.
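The final averaging ensemble over the four region-wise networks can be sketched as follows. The class count and the probability vectors are illustrative assumptions, not outputs of the paper's models:

```python
def average_ensemble(prob_lists):
    """Average per-class probabilities from several models, then argmax."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(m[c] for m in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

# Softmax outputs of four hypothetical region-wise CNNs for one wrist image,
# over three toy age bands:
preds = [
    [0.60, 0.30, 0.10],
    [0.20, 0.50, 0.30],
    [0.50, 0.40, 0.10],
    [0.55, 0.35, 0.10],
]
label, avg = average_ensemble(preds)
print(label)  # class 0 wins on the averaged probabilities
```

Averaging smooths over a single region network's mistake (here the second model's vote for class 1), which is the usual motivation for this kind of ensemble.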
Cyber physical systems: A smart city perspective
Khan, Firoz; Kumar, R. Lakshmana; Kadry, Seifedine ...
International Journal of Electrical and Computer Engineering (Malacca), 08/2021, Volume 11, Issue 4
Journal Article
Cyber-physical system (CPS) is a term used to describe systems that combine computing technologies (cyberspace) with existing physical infrastructure and manufacturing systems to integrate human interaction. This paper reviews the literature on CPS in terms of its importance in today's world. Further, it examines the relationship between CPS and the Internet of Things (IoT). CPS is a very broad area used in a variety of fields, and some of the major ones are evaluated. Additionally, the implementation of CPS and IoT is a major enabler of smart cities, and various examples of such implementations in the context of Dubai and the UAE are researched. Finally, security issues related to CPS in general are also reviewed.
In [28], a core deep learning (CDL) framework was used to solve the problem of heterogeneous visible versus near-infrared (VIS-NIR) image matching. ... There is a need for a more comprehensive yet less complex automatic method for measuring the similarity of nodes and finding diffusion paths in heterogeneous networks. In this paper, the problem of predicting the path of information diffusion in a network is mapped to a deep learning problem. Since predicting the new users who will lie on the path of an information flow is a recognition process, this problem can be solved by machine learning algorithms. ... In DBLP, an important computer-science bibliography database, vertices can be authors, articles, and venues (journals/conferences), while edges can represent author-author relationships in the sense that two authors have worked in the same area or attended the same conferences.
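As a structural illustration of diffusion paths in such a heterogeneous graph, a plain breadth-first search over a toy DBLP-style graph is sketched below. The node names are invented, and the paper's learned path predictor is replaced by shortest-hop search, which only shows what a "path" means here, not how it is predicted:

```python
from collections import deque

def diffusion_path(graph, source, target):
    """Shortest hop path along which information could spread from
    source to target (BFS; a stand-in for a learned path predictor)."""
    prev = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            path = []
            while node is not None:   # walk predecessors back to the source
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nb in graph.get(node, []):
            if nb not in prev:
                prev[nb] = node
                queue.append(nb)
    return None

# Toy heterogeneous graph: authors (A*), a paper (P1), a venue (V1).
graph = {
    "A1": ["P1"],
    "P1": ["A1", "A2", "V1"],
    "A2": ["P1"],
    "V1": ["P1", "A3"],
    "A3": ["V1"],
}
print(diffusion_path(graph, "A1", "A3"))
```

Information reaches author A3 from A1 only through the shared paper and venue nodes, which is the heterogeneity the similarity measure must account for.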
Sentiment analysis is a current research topic pursued by many researchers using supervised machine learning algorithms. The analysis can be performed on movie reviews, Twitter posts, online product reviews, blogs, discussion forums, Myspace comments, and social networks. Here, a Twitter dataset is analyzed using a support vector machine (SVM) classifier with various parameters. The content of each tweet is classified to determine whether it contains factual or opinion data. Deeper analysis is required to find the opinion expressed in the tweets posted by individuals. Sentiment is classified into positive, negative, and neutral. From this classification and analysis, important decisions can be made to improve productivity. The performance of SVM with a radial kernel, SVM linear grid, and SVM radial grid was compared, and SVM linear grid was found to perform better than the other SVM models.
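The three-way polarity decision can be illustrated with a toy lexicon scorer. The word lists below are invented stand-ins; the paper's SVM classifiers learn this decision boundary from labeled tweets rather than from hand-written rules:

```python
POSITIVE = {"good", "great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "sad"}

def classify_tweet(text):
    """Toy polarity scorer: count positive vs negative words and map
    the difference to positive / negative / neutral."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_tweet("great phone, love the battery"))
print(classify_tweet("terrible service"))
print(classify_tweet("the package arrived today"))
```

An SVM replaces the fixed word lists with weights learned per feature, which is what lets it handle negation, misspellings, and context that a lexicon misses.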
Received Sep 2, 2019; Revised Jan 18, 2020; Accepted Feb 23, 2020. Keywords: artificial intelligence, bot creation, natural language processing, search engine, text summarization. ABSTRACT: Natural language processing is a trending research area that allows developers to bring human-computer interaction into existence. ... It can be said that this is one of the toughest research areas for programmers to design for. ... The approach presents the most important and relevant content of a page to the user, with respect to the need for grasping the basic information. The architecture of the proposed work starts with a graphical user interface that allows the user to input a query as in a general search engine; the search program then fetches and extracts data from multiple search engines such as Google, Bing, and Yahoo using spider and robot programs for the given user query and stores the results in a database.
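The summarization step that follows the multi-engine fetch can be sketched with a simple frequency-based extractive scorer. The scoring scheme and sample text are illustrative assumptions, not the paper's method:

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Frequency-based extractive summary: score each sentence by the
    document-level frequency of its words, keep the top scorers in order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z]+", text.lower())
    freq = Counter(words)
    scored = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w]
                           for w in re.findall(r"[a-z]+", sentences[i].lower())),
    )
    keep = sorted(scored[:n_sentences])     # restore original sentence order
    return " ".join(sentences[i] for i in keep)

doc = ("Search engines index pages. Search engines rank pages by relevance. "
       "Cats are nice.")
print(summarize(doc, n_sentences=1))
```

Applied to the pages fetched from the different engines, such a scorer surfaces the sentences that carry the page's dominant vocabulary, which approximates "the most important and relevant content" the abstract describes.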