The rapid expansion of the world's population has increased the demand for agricultural products, which in turn necessitates improved crop yields. Enhancing crop yields requires effective weed control. Traditionally, weed control relied predominantly on herbicides; however, indiscriminate herbicide application poses hazards to both crop health and productivity. Fortunately, the advent of cutting-edge technologies such as unmanned aerial vehicle (UAV) technology and computer vision has provided automated and efficient solutions for weed control. These approaches leverage drone images to detect and identify weeds with a certain level of accuracy. Nevertheless, weed identification in drone images remains challenging due to factors such as occlusion, variations in color and texture, and disparities in scale. The traditional image processing techniques and deep learning approaches commonly employed by existing methods struggle to extract discriminative features and to handle scale variations. To address these challenges, we introduce a deep learning framework designed to classify every pixel in a drone image into categories such as weed, crop, and others. The proposed network adopts an encoder–decoder structure. The encoder combines a Dense-Inception network with an Atrous Spatial Pyramid Pooling (ASPP) module, enabling the extraction of multi-scale features and capturing local and global contextual information. The decoder incorporates deconvolution layers and attention units, namely channel and spatial attention units (CnSAUs), which restore spatial information and enhance the precise localization of weeds and crops in the images.
The performance of the proposed framework is assessed using a publicly available benchmark dataset known for its complexity. The effectiveness of the proposed framework is demonstrated via comprehensive experiments, showcasing its superiority by achieving a 0.81 mean Intersection over Union (mIoU) on the challenging dataset.
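To make the channel-and-spatial attention idea concrete, the following is a minimal NumPy sketch of a sequential channel-then-spatial gate. The function name, the use of mean pooling as the descriptor, and the sigmoid gating are illustrative assumptions, not the paper's exact CnSAU design:

```python
import numpy as np

def channel_spatial_attention(feat):
    """Illustrative channel-then-spatial attention gate (hypothetical
    sketch, not the authors' exact CnSAU formulation).
    feat: feature map of shape (C, H, W)."""
    C, H, W = feat.shape
    # Channel attention: squeeze spatial dims, gate each channel.
    channel_desc = feat.mean(axis=(1, 2))                # shape (C,)
    channel_gate = 1.0 / (1.0 + np.exp(-channel_desc))   # sigmoid in (0, 1)
    feat = feat * channel_gate[:, None, None]
    # Spatial attention: squeeze channels, gate each spatial location.
    spatial_desc = feat.mean(axis=0)                     # shape (H, W)
    spatial_gate = 1.0 / (1.0 + np.exp(-spatial_desc))
    return feat * spatial_gate[None, :, :]

x = np.random.rand(8, 4, 4)   # toy (C, H, W) feature map
y = channel_spatial_attention(x)
```

Because both gates lie in (0, 1), the module re-weights rather than amplifies activations; in a trained decoder, learned weights would produce the descriptors instead of plain means.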
As billions of IoT devices join the Internet, researchers and innovators increasingly explore IoT capabilities achieved via service composition, or the reuse of existing capabilities via service decomposition. Many systematic literature reviews (SLRs) have been produced on this subject; however, two issues remain to be addressed: i) a reference taxonomy of the different aspects of IoT capability composition and decomposition is needed, and ii) many formal questions (e.g., the role of standards, applications of formal representations, countermeasures to state-space explosion), technical questions (e.g., synergies between composition process types and automation levels, service decomposition categories, the role of AI/ML), and QoS questions (e.g., privacy, interoperability, and scalability challenges and solutions) remain unanswered. We introduce this work by discussing notions of IoT capability composition and decomposition in a layered IoT architecture, while highlighting the strengths and weaknesses of existing SLRs. We identify unanswered questions through gaps in related work and motivate these questions using the PICOC methodology. We explain the search methodology and organize the topic questions using the proposed reference taxonomy. The identified research questions are answered, and trends and gaps that need additional attention from the research community are highlighted. This effort benefits city planners and end-users of IoT systems, as it contributes to a better understanding of the role of composition and decomposition of IoT capabilities in building value-added services or reusing existing ones for resource optimization. For researchers, this effort contributes a reference taxonomy for the topic and sheds light on important questions while highlighting corresponding trends and gaps requiring further attention.
With the advancement of social sensing technologies, digital maps have recently witnessed a tremendous evolution, with the aim of integrating enriched semantic layers from heterogeneous and diverse data sources. Current generations of digital maps are often crowd-sourced, allow interactive route planning, and may contain live updates, such as traffic congestion states. Within this context, we believe that the next generation of maps will introduce the concept of extracting Events of Interest (EoI) from crowdsourced data and displaying them at different spatial scales based on their significance. This paper introduces Hadath, a scalable and efficient system that extracts social events from unstructured data streams, e.g., Twitter. Hadath applies natural language processing and multi-dimensional clustering techniques to extract relevant events of interest at different map scales and to infer the spatio-temporal scope of detected events. Hadath also implements a hierarchical in-memory spatio-temporal indexing scheme that allows efficient and scalable access both to raw data and to extracted clusters of events. Initially, data packets are processed to discover events at a local scale; then, the proper spatio-temporal scope and the significance of detected events at a global scale are determined. As a result, live events can be displayed at different spatio-temporal resolutions, allowing a smooth and unique browsing experience. Finally, to validate the proposed system, we conducted experiments on real-time and historical social media streams.
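As a rough intuition for local-scale event discovery from a geotagged stream, the following toy sketch buckets posts into spatio-temporal cells and keeps only sufficiently active cells. The cell sizes, threshold, and data layout are invented for illustration; Hadath's multi-dimensional clustering and hierarchical indexing are far more elaborate:

```python
from collections import defaultdict

def bucket_events(posts, cell_deg=0.5, window_min=60):
    """Toy spatio-temporal bucketing (illustrative only; not Hadath's
    actual algorithm). posts: iterable of (lat, lon, minute_ts, text)."""
    buckets = defaultdict(list)
    for lat, lon, t, text in posts:
        # Discretize space into cell_deg x cell_deg cells and time into
        # fixed windows; posts sharing a key are candidate co-events.
        key = (int(lat // cell_deg), int(lon // cell_deg), t // window_min)
        buckets[key].append(text)
    # Keep only cells with enough activity to count as a local event.
    return {k: v for k, v in buckets.items() if len(v) >= 2}

posts = [
    (33.89, 35.50, 10, "concert downtown"),
    (33.90, 35.51, 40, "huge crowd at the concert"),
    (34.40, 35.80, 15, "quiet morning"),
]
events = bucket_events(posts)
```

A hierarchy of such grids at coarser cell sizes is one simple way to picture how the same stream could yield events at different map scales.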
Today's cities generate tremendous amounts of data, thanks to a boom in affordable smart devices and sensors. The resulting big data creates opportunities to develop diverse sets of context-aware services and systems, ensuring that smart city services are optimized for the dynamic city environment. Critical resources in these smart cities can then be deployed more rapidly to regions in need, including regions predicted to have an imminent or prospective need. For example, crime data analytics may be used to optimize the distribution of police, medical, and emergency services. However, as smart city services become dependent on data, they also become susceptible to disruptions in data streams, such as data loss due to reduced signal quality or power loss during data collection. This paper presents a dynamic network model for improving service resilience to data loss. The network model identifies statistically significant shared temporal trends across multivariate spatiotemporal data streams and utilizes these trends to improve data prediction performance in the case of data loss. Its dynamics also allow the system to respond to changes in the data streams, such as the loss or addition of information flows. The network model is demonstrated on city-based crime rates reported in Montgomery County, MD, USA. A resilient network is developed utilizing shared temporal trends between cities to provide improved crime rate prediction and robustness to data loss, compared with single city-based auto-regression. A maximum performance improvement of 7.8% is found for Silver Spring, with an average improvement of 5.6% among cities with high crime rates. The model also correctly identifies all the optimal network connections, according to prediction error minimization. City-to-city distance is shown to be a predictor of shared temporal trends in crime, and weather is shown to be a strong predictor of crime in Montgomery County.
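The core idea of exploiting a shared temporal trend to survive data loss can be sketched very simply: fit a linear relationship between a target stream and a correlated neighboring stream on the observed points, then use it to fill a gap. This is a minimal stand-in for the paper's dynamic network model, with made-up numbers:

```python
def fill_gap(target, neighbor, missing_idx):
    """Sketch of gap-filling from a correlated neighboring stream
    (illustrative; the actual model selects and weights connections
    dynamically). Fits target ~ a*neighbor + b on observed points."""
    xs = [neighbor[i] for i in range(len(target)) if i != missing_idx]
    ys = [target[i] for i in range(len(target)) if i != missing_idx]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Ordinary least-squares slope and intercept.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a * neighbor[missing_idx] + b

# Two cities whose weekly counts share a temporal trend (toy data).
city_a = [10.0, 12.0, 11.0, None, 14.0]   # gap at index 3
city_b = [20.0, 24.0, 22.0, 26.0, 28.0]
estimate = fill_gap(city_a, city_b, missing_idx=3)
```

When the shared trend is strong, the neighbor's observation at the missing time step carries most of the information needed to reconstruct the lost value.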
The processes of digital transformation involve a variety of socio-technical activities, with the objectives of increasing productivity, safety and quality of execution, sustainable development, collaborative working, and solutions for the sustainable smart city. Major digital trends are changing the building sector and revealing new ways of understanding how information technologies can be integrated into it. Current smart building management systems incorporate a variety of sensors, actuators, and dedicated networks. Their objective is to observe the condition of specific areas and apply appropriate rules to preserve or improve comfort while saving energy. In this paper, we propose a review of works related to IoT and Big Data analytics in smart buildings.
The automated classification of gastrointestinal endoscopy images holds immense importance in modern health care. It streamlines the diagnostic process by enabling faster and more accurate identification of gastrointestinal diseases. While existing automated methods have demonstrated promising performance, a gap remains in consistently achieving high accuracy. This is because endoscopy images suffer from inter-class similarities and intra-class differences, which complicate the classification task. To address these problems, we propose a framework for endoscopy image classification. The proposed framework comprises three essential modules. The first module is the Local-Global Convolutional Neural Network (LG-CNN), which extracts local fine-grained features and captures global context. The second module is the Endoscopy-Lesion Attention (ELA) module, which enables the framework to emphasize the most crucial regions and filter out noise and other irrelevant information. Finally, the last module, the Gastrointestinal Endoscopy CNN (GE-CNN), leverages the above two modules in an effective way to classify the input image into various categories. We evaluate the performance of the proposed framework on two publicly available challenging datasets, namely Kvasir and HyperKvasir. Based on the experimental results, we illustrate the efficacy of the proposed framework in effectively classifying endoscopy images.
•Inter-class similarities and intra-class differences complicate the image classification task.
•Fine-grained features and global context information are important for accurate classification.
•For accurate diagnosis, attention mechanisms are crucial to focus on relevant image regions.
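One common way to combine local fine-grained features with global context, in the spirit of the local-global idea above, is to pool a global descriptor from the feature map and concatenate it back onto every location. The sketch below is a hypothetical NumPy illustration of that pattern, not the LG-CNN architecture itself:

```python
import numpy as np

def fuse_local_global(fmap):
    """Toy local-global fusion (illustrative sketch, not the paper's
    LG-CNN). fmap: (C, H, W) local feature map from a conv backbone."""
    C, H, W = fmap.shape
    # Global context: one descriptor per channel via average pooling.
    global_ctx = fmap.mean(axis=(1, 2))                          # (C,)
    # Broadcast the global descriptor to every spatial location.
    global_map = np.broadcast_to(global_ctx[:, None, None], (C, H, W))
    # Each location now sees its local features plus the global context.
    return np.concatenate([fmap, global_map], axis=0)            # (2C, H, W)

x = np.random.rand(16, 8, 8)
fused = fuse_local_global(x)
```

Doubling the channel dimension this way lets a subsequent classifier weigh per-location detail against whole-image appearance, which is one plausible remedy for inter-class similarity.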
In natural systems, many species are able to coordinate their individual intelligences to perform complex tasks. Swarm intelligence likewise denotes the concept of distributed intelligence, which relies on the coordination of a massive number of individual intelligences. In this paper, we develop a logical approach to distributed intelligence. Our proposal is motivated by a pedestrian safety issue: we extend the foundations of a distributed non-monotonic theory to model the "vehicle-pedestrian" interaction system. This extension builds on the preliminary approach proposed in our previous work. Non-monotonic reasoning processes for distributed intelligent logic agents contribute strongly to the development of Pedestrian Collision Warning Systems, which benefit greatly from the efficiency of pedestrian detection, a task that has reached a high level of maturity thanks to several technologies. We demonstrate the validity of our construction and specify the knowledge representation of driver-pedestrian interaction in the context of the proposed theory.
For decades, numerous names have been given to the enhanced urban city: digital city, green city, smart city, and the list goes on. They are all accompanied by ideas and propositions to enrich citizens' quality of life by employing the latest information technology to improve environmental sustainability through better energy usage, targeting problems affecting infrastructure costs, automation, and the efficient distribution of human resources. Consequently, city governors provide plans and conceive laws so that society, including individuals and organizations, collaborates in a cycle of providers and consumers to make steps toward the smartification of the city in which they all operate. Hence, hundreds of cities around the world are living examples of what a smart city can look like in terms of information technology advancement and everyday usage. Each application, or more generally each system, serves and exists for a specific purpose, using mobile applications and small sensors that cooperate to deliver value, carrying huge economic and social worth and constituting a significant source of data. However, most of these applications are tied to specific domains and solely designed to solve predefined problems. Thus, from a decision maker's point of view, the cost of decisions becomes high when multiple data flows of different shapes must be correlated. As a solution, in this paper we propose a system based on abstracting city events of different backgrounds (social, urban, and natural), which we chose to call complex space-time events. Furthermore, we present its architecture and how it interacts with its external actors, and finally, we explain a use-case instance made specifically to counter the spread of the COVID-19 pandemic and retain public order.
In this paper, we present the necessary premises for the deployment of the Internet of Vehicles (IoV), integrating Big Data analytics of road network traffic measurements of the city of Mohammedia, Morocco. We introduce an architecture based on three main layers: the IoV, Fog Computing, and Cloud Computing layers. We specifically focus on the Fog Computing layer, in which we develop a framework for real-time collection and processing of events generated by intelligent vehicles, as well as for visualizing the traffic state on each road section. Furthermore, we consider the deployment and testing of the proposed framework using events retrieved from a VANET-type micro-simulation. Finally, we present and discuss the first obtained results as well as the advantages and limitations of the proposed architecture.
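The kind of fog-layer processing described above (aggregating vehicle events into a per-road-section traffic state) can be pictured with a small sketch. The event format, threshold, and state labels below are invented for illustration and do not reflect the framework's actual implementation:

```python
from collections import defaultdict

def traffic_state(events, free_flow_kmh=50.0):
    """Illustrative fog-layer aggregation of vehicle speed reports into
    a per-section traffic state (hypothetical sketch).
    events: iterable of (section_id, speed_kmh) tuples."""
    speeds = defaultdict(list)
    for section, speed in events:
        speeds[section].append(speed)
    state = {}
    for section, vals in speeds.items():
        avg = sum(vals) / len(vals)
        # Flag a section as congested when average speed drops below
        # half the assumed free-flow speed.
        state[section] = "congested" if avg < 0.5 * free_flow_kmh else "fluid"
    return state

events = [("S1", 12.0), ("S1", 18.0), ("S2", 45.0), ("S2", 52.0)]
state = traffic_state(events)
```

Keeping this aggregation in the fog layer means only compact per-section states, rather than raw vehicle events, need to travel up to the cloud layer.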
In the context of a 'smart campus', which serves as a specialized version of a smart city designed for educational institutions, there lies the potential to employ cutting-edge technologies like the ...Internet of Things (IoT), artificial intelligence (AI), and big data analytics1. These tools are directed towards establishing a more efficient, sustainable, and comfortable environment for the entire campus community. This study takes a step forward by concentrating on creating an optimal indoor environment within the campus, specifically tailored to individuals with specific environmental sensitivities. In our endeavor, we give precedence to two crucial environmental parameters: the heat index2 and the air quality index3, both known to have a significant impact on individual comfort and well-being. The goal is to optimize these factors, fostering a secure and favorable environment. To accomplish this, we propose the development of predictive models capable of forecasting heat index and air quality index values. By employing three prominent models - Long Short Term Memory (LSTM), Gated Recurrent Units (GRU), and one-dimensional Convolutional Neural Networks (Conv1D) - we seek to determine the suitability of an environment, ultimately enhancing the well-being of those within the campus.
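All three model families (LSTM, GRU, Conv1D) are typically trained on the same sliding-window supervision: a fixed number of past readings predicts the next value. The sketch below shows only that data preparation step, with made-up heat-index values and an arbitrary window length, not the study's actual configuration:

```python
def make_windows(series, lookback=3):
    """Build (window, next-value) training pairs of the kind fed to
    LSTM/GRU/Conv1D forecasters. The lookback of 3 is an arbitrary
    illustration, not the study's chosen window length."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])  # past `lookback` readings
        y.append(series[i + lookback])    # value to predict
    return X, y

# Hourly heat-index readings (made-up values).
heat_index = [29.0, 30.5, 31.0, 32.5, 33.0, 32.0]
X, y = make_windows(heat_index)
```

For a Conv1D or recurrent model, each window would additionally be reshaped to (lookback, n_features) before batching, but the supervision pairs themselves are exactly these.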