The booming Internet economy and generative artificial intelligence have driven the rapid growth of the digital content trading industry, creating an urgent need for fair protection of the rights of both buyers and sellers. To meet this need, a technique known as buyer–seller watermarking has emerged. However, most existing buyer–seller watermarking schemes adopt the owner-side embedding mode, which results in poor scalability. While a handful of schemes adopt the client-side embedding mode to enhance scalability, they either require the deep involvement of a trusted third party or fall short of ensuring complete fairness due to the unresolved unbinding problem. To address these challenges, this paper proposes a fair and scalable watermarking scheme for digital content transactions based on proxy re-encryption and digital signatures. First, the scheme solves the unbinding problem and ensures complete and fair protection of the rights of both buyers and sellers. Second, it adopts the client-side embedding mode and therefore scales well. Additionally, it eliminates the need for a trusted third party. Finally, theoretical analysis and experiments demonstrate that the proposed scheme achieves the intended design goals and offers clear efficiency advantages. (A minimal signature-binding sketch follows the highlights below.)
•Protecting the rights of both buyers and sellers throughout content transactions.
•Offering solutions for the orderly development of the content trading industry.
•Client-side embedding is implemented while the unbinding problem is resolved.
•The involvement of a trusted third party is not required.
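To make the binding idea concrete, here is a minimal TypeScript sketch (Node's crypto module, Ed25519) of how a digital signature can tie a watermark ciphertext to one specific transaction, which is the intuition behind resolving the unbinding problem. All names and the transaction format are illustrative; the paper's actual protocol additionally involves proxy re-encryption and client-side embedding.

```typescript
// Minimal sketch: binding a watermark to one transaction via a digital
// signature, so watermark evidence cannot be transplanted ("unbound")
// into a different transaction. Names are illustrative only.
import { generateKeyPairSync, sign, verify, createHash } from "node:crypto";

// Buyer's long-term signing key pair (Ed25519 for brevity).
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// The signed digest covers both the watermark ciphertext and the
// transaction identifier, so the two are inseparable.
function bindingDigest(watermarkCiphertext: Buffer, transactionId: string): Buffer {
  return createHash("sha256")
    .update(watermarkCiphertext)
    .update(transactionId)
    .digest();
}

const watermarkCiphertext = Buffer.from("encrypted-watermark-bytes"); // placeholder
const transactionId = "tx-2024-0001"; // placeholder

const signature = sign(null, bindingDigest(watermarkCiphertext, transactionId), privateKey);

// An arbiter later checks the binding against the claimed transaction.
const ok = verify(null, bindingDigest(watermarkCiphertext, transactionId), publicKey, signature);
console.log("binding valid:", ok);
```

Because the digest commits to the watermark ciphertext and the transaction identifier together, a seller cannot reuse extracted watermark evidence against the buyer in any other transaction.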
It is critical to obtain accurate flood extent predictions in a timely manner in order to reduce casualties and economic losses from floods whose magnitudes fall outside the handling capacity of our existing mitigation systems. Running a real-time flood inundation mapping model helps support quick response decisions for unplanned floods, such as how to distribute limited resources and labor so that the most flood-prone areas receive adequate mitigation efforts, and how to execute evacuations that keep people safe while causing the least amount of unneeded disruption. Most inundation systems, however, are either overly demanding in terms of data and computing power or offer limited interaction and customization with various input and model configurations. This paper describes a client-side, web-based, real-time inundation mapping system based on the Height Above the Nearest Drainage (HAND) model. The system includes tools for hydro-conditioning terrain data, modifying terrain data, custom inundation mapping, online model performance evaluation, and hydro-spatial analyses. Instead of only being able to work on a few preprocessed datasets, the system is ready to run in any region of the world with limited data needs (i.e., elevation). With the system's multi-depth inundation mapping approach, water depth measurements (sensor-based or crowdsourced) or model predictions can be used to generate more accurate flood inundation maps based on current or future conditions. All of the system's functions can be performed entirely in a client-side web browser, without the need for GIS software or server-side computing. For decision-makers and the general public with limited technical backgrounds, the system provides a one-stop, easy-to-use flood inundation modeling and analysis tool. (A sketch of the core HAND mapping rule follows the highlights below.)
•A rapid inundation mapping system that works in the browser without software downloads
•Client-side computations that do not rely on high-performance or server-side computing
•Online model performance evaluation and model improvement
•Improves the standard HAND model procedure with multi-depth inundation mapping
•Valuable functions with hydro-spatial analysis and flood mitigation modules
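As a point of reference, the core HAND mapping rule that such systems build on can be stated in a few lines. This is a generic TypeScript sketch inferred from the abstract, not the paper's implementation, and the grid values are made up.

```typescript
// Core HAND rule: a cell is inundated when the water stage exceeds its
// Height Above the Nearest Drainage, and the local depth is the difference.
function inundationDepths(hand: number[][], stage: number): number[][] {
  return hand.map(row =>
    row.map(h => (stage > h ? stage - h : 0)) // depth in the same units as the DEM
  );
}

// Example: a 3x3 HAND grid (metres) and a 2 m stage.
const handGrid = [
  [0.5, 1.2, 3.0],
  [0.8, 2.5, 4.1],
  [1.9, 2.0, 5.5],
];
console.log(inundationDepths(handGrid, 2.0));
// Cells with HAND < 2 m are flooded; all others stay dry (depth 0).
```

The multi-depth approach described in the abstract generalizes this by applying different stage values per drainage reach, informed by sensors, crowdsourcing, or model predictions.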
Collaborative cloud applications have become the dominant application mode in the big data era. These applications usually generate many cooperative files whose ownership is shared among all collaborative participants. Data deduplication is a promising solution to improve storage efficiency and reduce user expenditure. However, it remains an open issue how to securely prove shared ownership of such files and how to counter the attacks that arise from using data deduplication. To tackle this issue, this paper introduces the novel concept of Proof of Shared oWnership (PoSW) and constructs a secure multi-server-aided PoSW (ms-PoSW) scheme for securing client-side deduplication of shared files, based on convergent encryption, secret sharing, and Bloom filters. In the ms-PoSW scheme, we employ a shared convergent key to avoid a single point of failure, introduce a secret sharing algorithm to implement shared ownership, and construct a novel interaction protocol between the shared owners and the cloud server to prove shared ownership. Furthermore, a hybrid PoSW scheme is constructed to address secure proofs in hybrid cloud architectures. Finally, security analysis and performance evaluation show the security and efficiency of the proposed schemes.
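Of the building blocks named above, convergent encryption is the one that makes deduplication of encrypted data possible; here is a minimal TypeScript sketch using Node's crypto module. The ms-PoSW scheme layers secret sharing, the Bloom filter, and the owner–server interaction protocol on top of this, which the sketch does not attempt to show.

```typescript
// Convergent encryption: the key is derived from the file itself, so
// identical files encrypt to identical ciphertexts and can deduplicate.
import { createHash, createCipheriv, createDecipheriv } from "node:crypto";

function convergentEncrypt(plaintext: Buffer) {
  const key = createHash("sha256").update(plaintext).digest(); // K = H(F)
  // A fixed IV keeps encryption deterministic, which deduplication
  // requires; this is the scheme's deliberate, well-known trade-off.
  const iv = Buffer.alloc(16, 0);
  const cipher = createCipheriv("aes-256-ctr", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  const tag = createHash("sha256").update(ciphertext).digest("hex"); // dedup tag T = H(C)
  return { key, ciphertext, tag };
}

function convergentDecrypt(ciphertext: Buffer, key: Buffer): Buffer {
  const iv = Buffer.alloc(16, 0);
  const decipher = createDecipheriv("aes-256-ctr", key, iv);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]);
}

const { key, ciphertext, tag } = convergentEncrypt(Buffer.from("shared project file"));
console.log(tag, convergentDecrypt(ciphertext, key).toString());
```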
A new class of poisoning attacks has recently emerged targeting the client-side Domain Name System (DNS) cache. It allows users to visit fake websites unknowingly, thereby revealing their information, such as passwords. However, the current DNS defense architecture does not include DNS clients. Although encryption-based solutions can mitigate this attack, they require the cooperation of multiple parties and are slow to deploy. Therefore, we propose an intelligence-driven proactive defense strategy. First, we model the offensive and defensive process as a stochastic game based on moving target defense. Second, we adopt and optimize Proximal Policy Optimization (PPO), a deep reinforcement learning method, to solve problems caused by uncertain attack strategies and unknown state transition probabilities. Third, we design a self-checking component in PPO to resolve the uncertainty of the action space caused by game state constraints, building on our previous work; this improves the convergence speed and stability of PPO. Finally, to the best of our knowledge, we are the first to play the game against intelligent attackers in addition to three conventional ones. Our strategy does not require any modifications to the DNS architecture. Through an extensive experimental campaign, the prototype system is shown to be effective against multiple attack modes: its success rate is approximately 98.5%, and the network round-trip time is about 55 ms. Even against random attackers, our method achieves the theoretical maximum defensive success rate.
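For reference, the clipped surrogate objective that standard PPO maximizes is shown below; the abstract's contributions (the stochastic game model and the self-checking component for constrained action spaces) sit on top of this baseline.

```latex
L^{\mathrm{CLIP}}(\theta)
  = \mathbb{E}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\;
      \operatorname{clip}\!\left(r_t(\theta),\,1-\epsilon,\,1+\epsilon\right)\hat{A}_t\right)\right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
```

Here \(\hat{A}_t\) is the advantage estimate and \(\epsilon\) the clipping range; the clipping keeps policy updates conservative, which is what makes PPO stable enough to optimize against uncertain attacker strategies.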
In the rapidly developing digital age, websites have become indispensable for interaction, information dissemination, and transactions. To improve the performance of web applications, choosing the right rendering technology is critical. Next.js is a framework designed to overcome React's limitations in server-side rendering. This study investigates the effectiveness of Client-Side Rendering (CSR), Server-Side Rendering (SSR), and Static Site Generation (SSG) on the Next.js-based Filmku website using the loading-time method. The study concentrates on page loading speed, complete page rendering speed, and user experience. The authentication page takes 422 ms to complete with CSR, which is 57.41% slower than the SSG finish time of 180 ms and 34.88% slower than SSR, which completes the authentication page in 274 ms. On the Profile page, SSG completes page rendering much faster, taking only 524 ms, which is 25.79% faster than SSR's completion time of 706 ms and 13.75% faster than CSR's completion time of 608 ms. On the main page, SSG completed in 1,135 ms, which is 15.93% faster than the CSR completion time of 1,350 ms and 25.57% faster than the SSR completion time of 1,525 ms. It is evident that SSG renders faster than the other methods. However, it should be noted that CSR may result in slower initial page load times, while SSR can provide stable rendering times but can also burden the server, as every client request is fully processed on the server.
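To make the three modes concrete, here is a minimal sketch of how a Next.js page (pages router) opts into SSG or SSR; the Filmku-style data source shown is a hypothetical placeholder.

```tsx
// pages/profile.tsx — minimal sketch; the endpoint is a placeholder.
import type { GetStaticProps } from "next";

// SSG: the page is rendered once at build time and served as static HTML,
// which is why it tends to win on load time in the measurements above.
export const getStaticProps: GetStaticProps = async () => {
  const films = await fetch("https://api.example.com/films").then(r => r.json());
  return { props: { films } };
};

// SSR alternative (a page exports one or the other, never both):
// export const getServerSideProps = async () => { ...same shape... };
// The page is then re-rendered on the server for every request.

// CSR: export neither function; the component fetches in the browser
// after hydration (e.g., in useEffect), so first paint is fast but the
// content itself arrives later.
export default function Profile({ films }: { films: unknown[] }) {
  return <pre>{JSON.stringify(films, null, 2)}</pre>;
}
```

With getStaticProps the HTML is produced at build time, consistent with SSG winning the timings above; getServerSideProps regenerates it per request, which explains both SSR's stability and its server load.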
Massive numbers of nodes in a blockchain form an off-chain distributed storage network that provides storage resources for users to meet large data upload requirements. However, this storage approach introduces security and performance issues. First, it is difficult to guarantee the integrity of the uploaded data, which may easily be corrupted or lost. Moreover, uploading excessive duplicate data wastes storage resources. To address these issues, this study proposes a novel public auditing scheme with client-side deduplication, built on a double-copy storage model for blockchain off-chain storage, to reduce the storage overhead of nodes and check the integrity of off-chain data. Based on smart contracts, the scheme automatically performs efficient user ownership and off-chain data integrity verification. In addition, both data encryption and deduplication are achieved with message-locked encryption and an improved authenticator generation algorithm. Security analysis and experimental comparisons show that the proposed scheme is effective and practical.
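A minimal TypeScript sketch of the client-side deduplication handshake implied by the abstract, assuming hypothetical storage-node endpoints: the client derives a content-defined tag, asks whether the data already exists, and uploads only if it does not. The scheme's smart-contract auditing and improved authenticators are not shown.

```typescript
// Client-side deduplication handshake (endpoints are hypothetical).
import { createHash } from "node:crypto";

async function uploadWithDedup(data: Buffer, storageUrl: string): Promise<void> {
  // Message-locked encryption makes the tag depend only on the content,
  // so two users holding the same file derive the same tag.
  const tag = createHash("sha256").update(data).digest("hex");

  // Ask the off-chain storage node whether this content already exists.
  const res = await fetch(`${storageUrl}/exists/${tag}`);
  if ((await res.json()).exists) {
    // Duplicate: run an ownership proof instead of re-uploading, then
    // simply gain a reference to the already-stored copy.
    console.log("duplicate detected, running ownership proof for", tag);
    return;
  }
  // First uploader: ship the (encrypted) data.
  await fetch(`${storageUrl}/upload/${tag}`, {
    method: "POST",
    body: new Uint8Array(data),
  });
}
```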
We present HydroCompute, a high-performance client-side computational library specifically designed for web-based hydrological and environmental science applications. Leveraging state-of-the-art technologies in web-based scientific computing, the library facilitates both sequential and parallel simulations, optimizing computational efficiency. Employing multithreading via Web Workers, HydroCompute enables the porting and use of various engines, including WebGPU, WebAssembly, and native JavaScript code. Furthermore, the library supports local data transfers through peer-to-peer communication using WebRTC. The flexible architecture and open-source nature of HydroCompute provide effective data management and decision-making capabilities, allowing users to integrate their own code into the framework. To demonstrate the library's capabilities, we conducted two case studies: a benchmarking study assessing the performance of the different engines, and a real-time data processing and analysis application for the state of Iowa. The results exemplify HydroCompute's potential to enhance computational efficiency and contribute to the interoperability and advancement of the hydrological and environmental sciences. (A minimal Web Worker sketch follows the highlights below.)
•HydroCompute is a web-based, high-performance library designed specifically for hydrology and the environmental sciences.
•Developed to leverage local multithreading on both CPU and GPU, resulting in significant performance improvements.
•The library enables computational efficiency in both sequential and parallel simulations, catering to diverse modeling needs.
•Using technologies such as Web Workers, WebAssembly, WebGPU, and WebRTC, the library facilitates efficient data manipulation.
•Through the developed case studies, the library demonstrates its relevance and applicability in the field of hydrology.
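Referring back to the abstract above, the multithreading pattern underlying HydroCompute can be illustrated with a plain Web Worker in TypeScript. The worker file, message shape, and kernel below are invented for illustration and are not HydroCompute's API.

```typescript
// Offload a numeric kernel to a Web Worker so the UI thread stays responsive.

// worker.ts (compiled separately to worker.js):
// self.onmessage = (e: MessageEvent<Float64Array>) => {
//   const out = e.data.map(x => Math.sqrt(x)); // stand-in computation
//   (self as unknown as Worker).postMessage(out, [out.buffer]);
// };

const worker = new Worker(new URL("./worker.js", import.meta.url), { type: "module" });

const input = new Float64Array([1, 4, 9, 16]);
worker.onmessage = (e: MessageEvent<Float64Array>) => {
  console.log("result from worker:", e.data); // [1, 2, 3, 4]
  worker.terminate();
};
// Transfer the buffer instead of copying it — the same zero-copy idea
// used when shuttling data between compute engines.
worker.postMessage(input, [input.buffer]);
```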
Major commercial client-side video players employ adaptive bitrate (ABR) algorithms to improve the user quality of experience (QoE). As ABR algorithms have evolved, increasingly complex methods such as neural networks have been adopted in pursuit of better performance. However, these complex methods are too heavyweight to be deployed directly on client devices with limited resources, such as mobile phones. Existing solutions suffer from a trade-off between algorithm performance and deployment overhead. To make the deployment of sophisticated ABR algorithms practical, we propose PiTree, a general, high-performance, and scalable framework that faithfully converts sophisticated ABR algorithms into decision trees through teacher-student learning. In this way, network operators can train complex models offline and deploy the converted lightweight decision trees online. We also present a theoretical analysis of the conversion, providing two upper bounds: on the prediction error during conversion and on the generalization loss after conversion. Evaluation on three representative ABR algorithms, with both trace-driven emulation and real-world experiments, demonstrates that PiTree converts ABR algorithms into decision trees with less than 3% average performance degradation. Moreover, compared to original deployment solutions, PiTree can save considerable operating expenses for content providers.
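The artifact PiTree deploys client-side is just a decision tree over playback state; a minimal TypeScript sketch of such a tree and its inference loop is below. The features, thresholds, and bitrate ladder are invented for illustration, and the teacher-student conversion itself is not shown.

```typescript
// A tiny ABR decision tree: maps playback state to a bitrate choice.
type Tree =
  | { kind: "leaf"; bitrateKbps: number }
  | { kind: "split"; feature: "bufferSec" | "throughputKbps";
      threshold: number; left: Tree; right: Tree };

const tree: Tree = {
  kind: "split", feature: "bufferSec", threshold: 10,
  left: { kind: "split", feature: "throughputKbps", threshold: 1500,
    left:  { kind: "leaf", bitrateKbps: 750 },
    right: { kind: "leaf", bitrateKbps: 1200 } },
  right: { kind: "leaf", bitrateKbps: 2400 },
};

// Inference is a handful of comparisons — cheap enough for any client.
function decide(t: Tree, state: { bufferSec: number; throughputKbps: number }): number {
  if (t.kind === "leaf") return t.bitrateKbps;
  return decide(state[t.feature] <= t.threshold ? t.left : t.right, state);
}

console.log(decide(tree, { bufferSec: 6, throughputKbps: 2000 })); // 1200
```

This is the point of the conversion: the teacher network runs offline, while the deployed student is a few comparisons per chunk with negligible memory footprint.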
Our increasing reliance on digital technology for personal, economic, and government affairs has made it essential to secure the communications and devices of private citizens, businesses, and governments. This has led to pervasive use of cryptography across society. Despite its evident advantages, law enforcement and national security agencies have argued that the spread of cryptography has hindered access to evidence and intelligence. Some in industry and government now advocate a new technology to access targeted data: client-side scanning (CSS). Instead of weakening encryption or providing law enforcement with backdoor keys to decrypt communications, CSS would enable on-device analysis of data in the clear. If targeted information were detected, its existence and, potentially, its source would be revealed to the agencies; otherwise, little or no information would leave the client device. Its proponents claim that CSS is a solution to the encryption-versus-public-safety debate: it offers privacy, in the sense of unimpeded end-to-end encryption, and the ability to investigate serious crime successfully. In this paper, we argue that CSS neither guarantees efficacious crime prevention nor prevents surveillance; indeed, the effect is the opposite. CSS by its nature creates serious security and privacy risks for all of society, while the assistance it can provide to law enforcement is at best problematic. There are multiple ways in which CSS can fail, can be evaded, and can be abused.
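The matching step at the heart of CSS can be sketched in a few lines. This toy TypeScript version uses SHA-256 and an in-memory target list, whereas real proposals use perceptual hashes and remote or obfuscated databases, which is precisely where the paper locates many of the risks.

```typescript
// Toy CSS matching step: hash local content on-device and compare it to a
// target list; only a match would trigger a report. The list contents and
// any reporting channel are hypothetical.
import { createHash } from "node:crypto";

const targetHashes = new Set<string>([
  // Digests distributed by the scanning authority. This sample value is
  // SHA-256 of the string "test", used here purely for demonstration.
  "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
]);

function scan(content: Buffer): boolean {
  const digest = createHash("sha256").update(content).digest("hex");
  return targetHashes.has(digest); // true would mean "report this item"
}

console.log(scan(Buffer.from("test")));  // true  (matches the sample digest)
console.log(scan(Buffer.from("other"))); // false (nothing leaves the device)
```

Even this toy version makes the paper's concern visible: whoever controls the target list and the reporting channel controls what gets scanned for, on every device.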