Property buyout is one of the flood mitigation measures most frequently preferred by decision-makers for long-term risk reduction. Because buyouts demand substantial funding, they require an extensive benefit-cost analysis for the selected region. Many communities in the State of Iowa experienced extreme flood events (e.g., 1993, 2008, 2014, 2019), which resulted in heavy economic losses over the last few decades. Nearly 3000 property acquisitions were made between 2007 and 2017 using federal programs. This study presents a web-based Flood Risk Assessment and Mitigation Environment (FRAME), which provides visual data analytics capabilities for property- and community-level benefit-cost analysis of property acquisitions. FRAME allows users to explore and visualize historical mitigation projects and buyouts, and to evaluate avoided damages for their communities. As a case study, a detailed benefit-cost analysis of historical property buyouts and direct losses of existing properties in the Middle Cedar watershed in Iowa is conducted using stream gauge data from the United States Geological Survey (USGS). Projected stream gauge datasets, the outputs of two climate scenarios (A1FI, fossil intensive, and A2, low emission), are also utilized to assess future avoided losses for acquisitions and possible direct economic losses for existing properties. Case study results indicate that the average benefit-cost ratio (BCR) for buyouts in the studied region is around 0.86. When future floods are considered, nearly half of the buyouts reach a BCR of 4.72 under the low-emission projection and 6.3 under the fossil-intensive projection.
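At its core, the BCR reported above is avoided flood damages divided by acquisition cost. A minimal sketch of that arithmetic (all dollar figures are hypothetical, not taken from the study):

```python
def benefit_cost_ratio(avoided_losses, acquisition_cost):
    """BCR = total avoided flood damages / buyout (acquisition) cost."""
    return sum(avoided_losses) / acquisition_cost

# Hypothetical buyout: $150k acquisition, damages avoided in three flood events.
bcr = benefit_cost_ratio([40_000, 55_000, 34_000], 150_000)
print(round(bcr, 2))  # 0.86
```

A BCR above 1.0 means the buyout avoided more in damages than it cost; the study's point is that ratios near 0.86 for past floods can rise well above 1.0 once projected future floods are counted among the avoided losses.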
We present HydroCompute, a high-performance client-side computational library specifically designed for web-based hydrological and environmental science applications. Leveraging state-of-the-art technologies in web-based scientific computing, the library facilitates both sequential and parallel simulations, optimizing computational efficiency. Employing multithreading via web workers, HydroCompute enables the porting and utilization of various engines, including WebGPU, WebAssembly, and native JavaScript code. Furthermore, the library supports local data transfers through peer-to-peer communication using WebRTC. The flexible architecture and open-source nature of HydroCompute provide effective data management and decision-making capabilities, allowing users to integrate their own code into the framework. To demonstrate the capabilities of the library, we conducted two case studies: a benchmarking study assessing the performance of different engines and a real-time data processing and analysis application for the state of Iowa. The results exemplify HydroCompute's potential to enhance computational efficiency and contribute to the interoperability and advancement of hydrological and environmental sciences.
•HydroCompute is a web-based high-performance library designed specifically for hydrology and environmental sciences.
•Developed to leverage local multithreading on both CPU and GPU, resulting in significant performance improvements.
•The library enables computational efficiency in both sequential and parallel simulations, catering to diverse modeling needs.
•Using technologies such as Web Workers, WebAssembly, WebGPU, and WebRTC, the library facilitates efficient data manipulation.
•Through the developed case studies, the library demonstrates its relevance and applicability in the field of hydrology.
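The fan-out pattern the highlights describe is browser-specific (Web Workers dispatching to WebGPU/WebAssembly kernels), but the underlying idea, splitting independent simulations across workers and gathering results, can be sketched outside the browser with Python's thread pool. This is an analogy only, not HydroCompute's API; `run_simulation` is a hypothetical stand-in kernel:

```python
from concurrent.futures import ThreadPoolExecutor

def run_simulation(series):
    # Stand-in for a per-worker compute kernel (here, a simple mean).
    return sum(series) / len(series)

# Independent inputs are fanned out to workers and results gathered in
# submission order, mirroring a parallel-simulation mode.
inputs = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(run_simulation, inputs))
print(results)  # [2.0, 5.0, 8.0]
```

In the browser, each `run_simulation` call would instead be a message posted to a Web Worker, which is what lets the heavy computation stay off the UI thread.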
The height above nearest drainage (HAND) model is frequently used to calculate properties of the soil and predict flood inundation extents. HAND is extremely useful due to its lack of reliance on prior data, as only a digital elevation model (DEM) is needed. It is close to optimal, running in linear or linearithmic time in the number of cells depending on the range of height values. It can predict watersheds and flood extent to a high degree of accuracy. We applied a client-side HAND model on the web to determine the extent of flood inundation in several flood-prone areas in Iowa, including the cities of Cedar Rapids and Ames. We demonstrated that the HAND model was able to achieve inundation maps comparable to advanced hydrodynamic models (i.e., Federal Emergency Management Agency approved flood insurance rate maps) in Iowa, and would be helpful in the absence of detailed hydrological data. The HAND model is applicable in situations where a combination of accuracy and short runtime is needed, for example, in interactive flood mapping and supporting mitigation decisions, where users can add features to the landscape and see the predicted inundation.
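The HAND idea itself is compact: each cell's value is its elevation minus the elevation of the drainage cell its flow path reaches, and a cell floods when the water stage exceeds that value. A toy 1-D sketch (not the paper's implementation; it assumes every cell has a monotone descent path to a drainage cell, and real models operate on 2-D flow-direction grids):

```python
def hand_1d(dem, drainage):
    """Height Above Nearest Drainage for a 1-D DEM transect.

    Each cell drains to its lower neighbor (steepest descent); HAND is the
    cell's elevation minus that of the drainage cell it ultimately reaches.
    """
    hand = [0.0] * len(dem)
    for i in range(len(dem)):
        j = i
        while j not in drainage:
            # Step toward the lower of the two neighbors.
            left = dem[j - 1] if j > 0 else float("inf")
            right = dem[j + 1] if j < len(dem) - 1 else float("inf")
            j = j - 1 if left <= right else j + 1
        hand[i] = dem[i] - dem[j]
    return hand

# Transect with a channel at index 2; a cell inundates when stage > HAND.
dem = [5.0, 3.0, 1.0, 2.0, 4.0]
print(hand_1d(dem, drainage={2}))  # [4.0, 2.0, 0.0, 1.0, 3.0]
```

The appeal noted in the abstract follows directly: the only input is the DEM, and the per-cell work is cheap enough to run client-side in a browser.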
Software resource allocation is a significant factor of system configuration which plays a critical role in guaranteeing the performance of multitier web service systems. Computing the optimal allocation of different software resources in order to meet performance requirements under dynamic workload conditions is highly challenging. Existing approaches mostly rely on translating domain knowledge from experts into computational solutions through heuristics-based optimization techniques. While such techniques are useful, they cannot leverage actual usage data generated by system users, which may contain allocation strategies that are not captured by domain experts' knowledge. In this paper, we propose an iterative feedback mechanism that addresses this problem by optimizing software resource allocation of multitier web systems through imitating system users who have achieved excellent performance. Specifically, we propose a deep Q-learning network-based approach for performance prediction to deal with the dynamic changes of complex workloads. The performance prediction method involves the reinforcement learning method for capturing the dynamics of online software resource allocation, and then computing the current optimal policy. We implement the approach in a multitier web benchmark system, and the experimental results demonstrate significant improvement compared to models built on domain knowledge.
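The deep Q-network itself is beyond an abstract-level sketch, but the underlying Q-learning update, nudging each (state, action) value toward reward plus the discounted best future value, and then allocating by the highest learned value, can be shown in tabular form. All states, actions, and rewards below are toy illustrations, not the paper's setup:

```python
# Toy setup: states are workload levels, actions are thread-pool sizes.
states, actions = ["low", "high"], [8, 16, 32]
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma = 0.5, 0.9  # learning rate, discount factor

def update(state, action, reward, next_state):
    """One Q-learning step: move Q(s, a) toward reward + gamma * max_a' Q(s', a')."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Reward = negative response time: a large pool pays off under high load,
# a small one under low load (hypothetical numbers).
for _ in range(50):
    for s in states:
        for a in actions:
            r = -1.0 if (s == "high" and a == 32) or (s == "low" and a == 8) else -5.0
            update(s, a, r, s)  # workload assumed to persist between steps

# The learned policy picks the allocation with the highest value per workload.
print(max(actions, key=lambda a: Q[("high", a)]))  # 32
```

The paper's contribution is replacing this lookup table with a deep network so the same update works over continuous, high-dimensional workload states.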
Kleinberg, Jon. "Analysis of large-scale social and information networks." Philosophical Transactions of the Royal Society of London, Series A: Mathematical, Physical, and Engineering Sciences 371, no. 1987 (March 2013).
The growth of the Web has required us to think about the design of information systems in which large-scale computational and social feedback effects are simultaneously at work. At the same time, the data generated by Web-scale systems, recording the ways in which millions of participants create content, link information, form groups and communicate with one another, have made it possible to evaluate long-standing theories of social interaction, and to formulate new theories based on what we observe. These developments have created a new level of interaction between computing and the social sciences, enriching the perspectives of both of these disciplines. We discuss some of the observations, theories and conclusions that have grown from the study of Web-scale social interaction, focusing on issues including the mechanisms by which people join groups, the ways in which different groups are linked together in social networks and the interplay of positive and negative interactions in these networks.
This study aims to identify the most effective input parameters for performance modelling of container-based web systems. We introduce a method using queueing Petri nets to model web system performance for containerized structures, leveraging prior measurement data for resource demand estimation. This approach eliminates intrusive interventions in the production system. Our research evaluates the accuracy of various formal estimation methods, pinpointing the most suitable one for container environments. Using a stock exchange web system benchmark for data collection and simulation verification, our findings reveal that the proposed method ensures precise response time parameter accuracy for such architectural configurations.
Web systems are becoming more and more popular. An efficiently working network system is the basis for the functioning of every enterprise. Performance models are powerful tools for performance prediction, but their creation requires significant effort. In this article, we present various performance models of customers and Web systems. In particular, we examine system behaviour related to different flow routes of clients in the system. Therefore, we propose Queueing Petri Nets, a modeling methodology for dealing with performance issues of production systems. We follow the simulation-based approach. We consider 25 different models to check performance, then evaluate them based on the proposed metrics. The validation results show that the models are able to predict the performance with a relative error lower than 20%. Our evaluation shows that the prepared models can reduce the effort of preparing production systems. The resulting performance model can predict the system behaviour in a particular layer at the indicated load.
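Queueing Petri nets are far richer than a single queue, but the kind of prediction being validated here, response time from arrival and service rates, checked against a measurement within a relative-error budget, can be illustrated with the classic M/M/1 formula. This is a deliberate simplification of the paper's models; the rates and measured value below are hypothetical:

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: W = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

def relative_error(predicted, measured):
    return abs(predicted - measured) / measured

# Hypothetical layer: 80 req/s arriving, server capacity 100 req/s.
predicted = mm1_response_time(80.0, 100.0)  # 0.05 s
# Validation in the spirit of the paper: is the prediction within 20%?
print(relative_error(predicted, measured=0.058) < 0.20)  # True
```

In the paper the predicted value comes from simulating a queueing Petri net of a whole multitier system rather than a closed-form formula, but the validation metric is the same relative-error comparison.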
Extreme hydrological phenomena are one of the most common causes of human life loss and material damage as a result of the manifestation of natural hazards around human communities. Climatic changes have directly impacted the temporal distribution of previously known flood events, inducing significantly increased frequency rates as well as manifestation intensities. Understanding the occurrence and manifestation behavior of flood risk, as well as identifying the most common time intervals during which there is a greater probability of flood occurrence, should be a matter of social priority, given the potential casualties and damage involved. However, despite the numerous flood analysis models that have been developed, this phenomenon has not yet been fully comprehended due to the many technical challenges that have arisen. These challenges range from a lack of measured field data to difficulties in integrating spatial layers of different scales, as well as other potential digital restrictions. The aim of the current book is to promote publications that address flood analysis and apply some of the most novel inundation prediction models, as well as various hydrological risk simulations related to floods, that will enhance the current state of knowledge in the field and lead toward a better understanding of flood risk modeling. Furthermore, in the current book, the temporal aspect of flood propagation, including alert times, warning systems, cartographic material on flood time distribution, and the numerous parameters involved in flood risk modeling, is discussed.
Cloud computing systems revolutionized the Internet, and web systems in particular. Quality of service is the basis of resource configuration management in the cloud. Load balancing mechanisms are implemented in order to reduce costs and increase the quality of service. The usage of those methods with adaptive intelligent algorithms can deliver the highest quality of service. In this article, a method of load distribution using neural networks to estimate service times is presented. The discussed research and experiments include many approaches, among others the application of a single artificial neuron, different structures of the neural networks, and different inputs for the networks. The results of the experiments let us choose a solution that enables effective load distribution in the cloud. The best solution is also compared with other intelligent approaches and distribution methods often used in production systems.
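Once service-time estimates exist, the distribution step itself is simple: route each request to the server with the lowest predicted completion time. A sketch with a hypothetical stand-in estimator in place of the paper's neural network (server names, speeds, and queue lengths are all illustrative):

```python
def pick_server(request_size, servers):
    """Route a request to the server with the smallest estimated completion time."""
    def estimate(srv):
        # Stand-in for the neural-network service-time estimator:
        # pending work plus this request, scaled by server speed.
        return (srv["queued"] + request_size) / srv["speed"]
    return min(servers, key=estimate)

servers = [
    {"name": "s1", "speed": 1.0, "queued": 4.0},  # slow, short queue
    {"name": "s2", "speed": 2.0, "queued": 6.0},  # fast, longer queue
]
chosen = pick_server(2.0, servers)
print(chosen["name"])  # s2: (6+2)/2 = 4.0 beats (4+2)/1 = 6.0
```

The point of learning the estimator, rather than hard-coding a formula as above, is that real service times depend on request content and server state in ways a fixed heuristic misses.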
Cloud-computing web systems and services revolutionized the web. Nowadays, they are the most important part of the Internet. Cloud-computing systems provide the opportunity for businesses to undergo digital transformation in order to improve efficiency and reduce costs. The sudden shutdown of schools and offices during the COVID-19 pandemic significantly increased the demand for cloud solutions. Load balancing and sharing mechanisms are implemented in order to reduce costs and increase the quality of web service. The usage of those methods with adaptive intelligent algorithms can deliver the highest and most predictable quality of service. In this article, a new HTTP request-distribution method in a two-layer architecture of a cluster-based web system is presented. This method allows for efficient processing and predictable quality by servicing requests within adopted time constraints. The proposed decision algorithms utilize fuzzy-neural models that allow service times to be estimated. This article provides a description of this new solution. It also contains the results of experiments in which the proposed method is compared with other intelligent approaches, such as Fuzzy-Neural Request Distribution, and with distribution methods often used in production systems.