In the past fifteen years, file sharing of digital cultural works between individuals has been at the center of a number of debates on the future of culture itself. To some, sharing constitutes piracy, to be fought against and eradicated. Others see it as unavoidable, and put forward proposals to compensate for its harmful effects. Meanwhile, little progress has been made towards addressing the real challenges facing culture in a digital world. Sharing starts from a radically different viewpoint, namely that the non-market sharing of digital works is both legitimate and useful. It supports this premise with empirical research, demonstrating that non-market sharing leads to more diversity in the attention given to various works. Taking stock of what we have learned about the cultural economy in recent years, Sharing sets out the conditions necessary for valuable cultural functions to remain sustainable in this context. Our software and datasets can be downloaded from the book site at http://www.sharing-thebook.net. On the same site, the reader can also run our models with adjusted parameters and upload datasets in order to run our algorithms for the study of diversity of attention.
To some, sharing equals piracy: something to be fought. Others see it as part of modern life and try to compensate for its harmful effects as far as possible. Meanwhile, little progress is being made in addressing the real problems that culture faces as ever more people become able to contribute to the production of culturally valuable works. Sharing argues that the non-commercial sharing of digital works is both legitimate and useful, because it leads to more diversity in the digital cultural world. Sharing discusses new financing schemes suited to a digital cultural sphere in which works can be freely shared between individuals. For more information about the book, visit: www.sharing-thebook.net
This work introduces novel methods for conducting forensic analysis of file allocation traces, collectively called digital stratigraphy. These in-depth forensic analysis methods can provide insight into the origin, composition, distribution, and time frame of strata within storage media. Using case examples and empirical studies, this paper illuminates the successes, challenges, and limitations of digital stratigraphy. It also shows how understanding file allocation methods can provide insight into concealment activities and how real-world computer usage can complicate digital stratigraphy. Furthermore, it explains how forensic analysts have misinterpreted traces of normal file system behavior as indications of concealment activities. This work raises awareness of the value of taking the overall context into account when analyzing file system traces, and calls for further research in this area and for forensic tools to provide the information necessary for such contextual analysis, such as highlighting mass deletion, mass copying, and potential backdating.
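As a rough illustration of the kind of contextual signal the authors want tools to surface, the following sketch flags bursts of deletions in file system metadata. The record format, window length, and threshold are assumptions made for the example, not details from the paper.

```python
# Hypothetical sketch: flag possible mass deletion from (path, deletion
# timestamp) records, in the spirit of the contextual analysis the paper
# calls for. The thresholds are illustrative assumptions.
from datetime import datetime, timedelta

def flag_mass_deletion(records, window=timedelta(minutes=5), threshold=100):
    """Return merged time spans in which at least `threshold` files were
    deleted within some sliding window of length `window`."""
    times = sorted(ts for _path, ts in records)
    spans, start = [], 0
    for end in range(len(times)):
        while times[end] - times[start] > window:
            start += 1
        if end - start + 1 >= threshold:
            if spans and times[start] <= spans[-1][1]:
                spans[-1] = (spans[-1][0], times[end])  # extend current span
            else:
                spans.append((times[start], times[end]))
    return spans

# Example: 150 deletions within one minute trigger a single flagged span.
base = datetime(2024, 1, 1, 12, 0)
recs = [(f"/docs/file{i}.txt", base + timedelta(seconds=i % 60)) for i in range(150)]
print(flag_mass_deletion(recs))
```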
This book covers an extensive range of topics related to visual cryptography techniques and secure image sharing solutions. It addresses sharing multiple secrets and visual cryptography schemes based on the probabilistic reconstruction of the secret image, including pictures in the distributed shares, contrast enhancement techniques, visual cryptography schemes based on different logical operations for combining shared images, cheating prevention, and the alignment problem for image shares. The book also describes practical applications, steganography, and authentication. Case studies demonstrate the effectiveness of the techniques.
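To ground the topic, here is a minimal sketch of the classic (2, 2) visual secret sharing scheme, the simplest instance of the schemes the book surveys. It assumes a binary secret image and a pixel expansion of two subpixels per share; everything here is illustrative, not the book's own code.

```python
# Minimal (2, 2) visual cryptography sketch for a binary image.
# Each secret pixel expands to two subpixels per share. For a white pixel
# both shares receive the same random pattern; for a black pixel they
# receive complementary patterns, so stacking (OR) yields all-black.
import random

PATTERNS = [(0, 1), (1, 0)]  # 1 = black subpixel, 0 = white subpixel

def make_shares(secret):
    """secret: 2-D list of 0 (white) / 1 (black) pixels."""
    share1, share2 = [], []
    for row in secret:
        r1, r2 = [], []
        for pixel in row:
            p = random.choice(PATTERNS)
            r1.extend(p)
            # White: identical patterns; black: complementary patterns.
            r2.extend(p if pixel == 0 else tuple(1 - b for b in p))
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def stack(s1, s2):
    """Simulate physically overlaying the two transparencies (logical OR)."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

secret = [[1, 0], [0, 1]]
s1, s2 = make_shares(secret)
print(stack(s1, s2))  # black pixels -> (1, 1); white -> one black, one white
```

Each share on its own is a uniformly random pattern, so it reveals nothing about the secret; only stacking both recovers the image, at the cost of reduced contrast for white pixels.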
Data Clustering. Aggarwal, Charu C.; Reddy, Chandan K., eds. 2013. eBook.
In this book, top researchers from around the world cover the entire area of clustering, from basic methods to more refined and complex data clustering approaches. They pay special attention to recent issues in graphs, social networks, and other domains. The book explores the characteristics of clustering problems in a variety of application areas. It also explains how to glean detailed insight from the clustering process, including how to verify the quality of the underlying clusters, through supervision, human intervention, or the automated generation of alternative clusters.
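As a concrete instance of the workflow the abstract describes (cluster, then verify cluster quality), here is a short sketch using k-means and the silhouette coefficient on synthetic data. The data and parameter choices are illustrative only, not taken from the book.

```python
# Illustrative sketch: cluster a small synthetic dataset and verify
# cluster quality with the silhouette coefficient.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Three synthetic Gaussian blobs in 2-D.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2)) for c in (0, 5, 10)])

for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))
# The silhouette score should peak at k = 3, matching the true structure.
```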
File carving techniques are important in the field of digital forensics. At the same time, the rapid growth in the amount and variety of data requires the development of file carving methods in terms of capabilities, accuracy, and computational efficiency. However, most methods are developed to solve specific tasks and are based on a certain set of assumptions and a priori knowledge about the files to be recovered. There is a lack of research that systematizes methods and structures approaches so as to identify gaps and determine promising directions for development, considering the latest advances in information technology and artificial intelligence. The subject matter of this article is the structure, factors, efficiency criteria, methods, and tools of file carving, as well as the current state and tendencies of development of file carving methods. The goal of this study is to systematize knowledge about advanced file carving methods and identify promising directions for their development. The tasks to be solved are as follows: to identify the main stages of file carving and analyze approaches to their implementation; to build an ontological scheme of file carving; and to identify promising directions for the development of carving methods. The methods used were literature review, systematization, and summarization. The obtained results are as follows. An ontological scheme for the file carving concept is constructed. The scheme includes the principles, properties, phases, techniques, evaluation criteria, tools used, and factors influencing file carving. The features, limitations, and fields of application of the data recovery methods are provided. It was established that the most widespread approach to file reconstruction is still a detailed manual analysis of the internal structure of files and/or their contents, identifying specific patterns that allow reassembling the sequence of data fragments in the correct order. However, most of the methods do not guarantee complete recovery. This article analyzes the current state and prospects of using artificial intelligence methods in the field of digital forensics, particularly for identifying data blocks, clustering, and reconstructing files, as well as restoring the contents of media files with damaged or lost headers. The necessity of having a priori information about the file structure or content for successfully carving fragmented data is established. Conclusions. The scientific novelty of the obtained results is as follows: for the first time, advanced file carving methods are systematized and analyzed by direction of development, along with the prospects of using artificial intelligence for identifying data blocks, clustering, and file content restoration; for the first time, an ontological scheme of file carving is constructed, which can be used as a roadmap for developing new advanced systems in the digital forensics field.
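For orientation, the simplest carving technique the survey covers is signature-based header/footer matching. The sketch below carves JPEG streams from a raw image by their magic bytes; the input path is a placeholder, and real carvers must additionally handle fragmentation, validation, and many more formats.

```python
# Minimal header/footer carving sketch: scan a raw disk image for JPEG
# signatures (header FF D8 FF, footer FF D9) and carve the spans between
# them. Works only for contiguous, unfragmented files.
HEADER = b"\xff\xd8\xff"
FOOTER = b"\xff\xd9"

def carve_jpegs(data):
    carved, pos = [], 0
    while (start := data.find(HEADER, pos)) != -1:
        end = data.find(FOOTER, start + len(HEADER))
        if end == -1:
            break  # header without a matching footer
        carved.append(data[start:end + len(FOOTER)])
        pos = end + len(FOOTER)
    return carved

with open("disk.img", "rb") as f:  # "disk.img" is a placeholder path
    for i, blob in enumerate(carve_jpegs(f.read())):
        with open(f"carved_{i}.jpg", "wb") as out:
            out.write(blob)
```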
We study the robustness of the cμ-rule for the optimal allocation of a resource consisting of one unreliable server to parallel queues with two different classes of customers. The customers in the queues can be served under a FIFO retrial discipline, in which the customers at the heads of the queues repeatedly try to occupy the server at random times. It is proved that for scheduling problems in the system without arrivals, the cμ-rule minimizes the total average cost. For the system with arrivals, it is difficult to prove the optimality of the same policy directly via explicit relations. For an infinite-buffer model, we derive a static control policy that, for certain values of the system parameters, prescribes serving class-i customers exclusively whenever both queues are nonempty, with the aim of minimizing the average cost per unit of time. It is also shown that in the finite-buffer case, the cμ-rule fails.
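For readers unfamiliar with the policy: the cμ-rule serves, among the nonempty queues, the class with the largest product of holding cost c_i and service rate μ_i. A minimal sketch of that selection step follows; the parameter values are illustrative only.

```python
# Sketch of the cμ-rule: whenever the server becomes free, serve the
# nonempty class with the largest product c_i * mu_i (holding cost times
# service rate).
def cmu_rule(queues, c, mu):
    """queues: dict class -> number waiting; c, mu: dicts of parameters.
    Returns the class to serve next, or None if all queues are empty."""
    nonempty = [k for k, n in queues.items() if n > 0]
    if not nonempty:
        return None
    return max(nonempty, key=lambda k: c[k] * mu[k])

# Two classes: class 1 has a higher holding cost but slower service.
queues = {1: 3, 2: 5}
c, mu = {1: 4.0, 2: 2.0}, {1: 0.5, 2: 1.5}
print(cmu_rule(queues, c, mu))  # class 2: 2.0 * 1.5 = 3.0 > 4.0 * 0.5 = 2.0
```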
Whether you choose to call it file sharing, copyright infringement, or digital piracy, downloading non-licensed music, movies, software, and other forms of digital content is common worldwide. Despite the widespread nature of this phenomenon, much remains unknown about it. Is "digital piracy" truly having a negative impact on society? Does enforcement actually deter would-be downloaders? Gunter examines these issues, providing both an analysis of the historical aspects of copyright and an examination of newly collected data on downloaders. His findings lead into a critique of the pros and cons of the current state of copyright law and enforcement.
A common task in computer forensics is to recover files that lack file system metadata. When searching for file fragments in unallocated space, file carving is the most commonly used method, and it is well suited to unfragmented data. However, such methods and the tools based on them are ineffective for recovering OOXML files with a high level of fragmentation, because they do not reliably determine the correct order of fragments. Techniques for reconstructing documents based on the analysis of words and phrases are also ineffective for fragmented OOXML documents. The main reason is that OOXML files are ZIP archives and, as a result, store data on disk in compressed form. This paper proposes a syntactical method for reconstructing OOXML documents based on knowledge of the internal structure of this file type, regardless of content. The details of the implementation of the reconstruction algorithm and the peculiarities of restoring certain types of local elements of the document are considered. The efficiency of the algorithm was tested on the Govdocs1 and NapierOne datasets. The proposed method was applied to 4096-byte data blocks, which correspond to the standard cluster size in different file systems. The experimental results confirmed the method's suitability for practical use, with 82.97 % of files recovered: 34.38 % reconstructed completely, 0.43 % missing at most the last 21 bytes, and another 48.16 % recovered except for embeddings, which require other approaches. In the latter case, it is possible to obtain a fully working document that does not display graphic images and the other contents of the embeddings. The presence in OOXML files of CRC-32 hashes of the uncompressed data stream of each local element allows us to unambiguously confirm the correctness and integrity of the recovered information. At the same time, the method's effectiveness depends mainly on the data verification methods used during the reconstruction of local elements that occupy at least three clusters in the file. Therefore, we plan to improve the method by developing new mechanisms for verifying XML elements.
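The CRC-32 check the authors rely on is a standard feature of the ZIP container: each local file header stores the CRC-32 of the entry's uncompressed data. A hedged sketch of that verification step follows; the file name is a placeholder, and the layout assumes the header actually carries the CRC and sizes (i.e., the data-descriptor flag is not set), which holds for typical OOXML writers.

```python
# Sketch: verify a reconstructed ZIP local file entry by comparing the
# CRC-32 stored in the local header with the CRC-32 of the inflated data.
import struct
import zlib

def verify_local_entry(buf, offset=0):
    # Local file header: signature, version, flags, method, mod time,
    # mod date, CRC-32, compressed size, uncompressed size,
    # file name length, extra field length (30 bytes total).
    sig, _ver, flags, method, _t, _d, crc, csize, _usize, nlen, xlen = \
        struct.unpack_from("<IHHHHHIIIHH", buf, offset)
    assert sig == 0x04034B50, "not a local file header"
    # If bit 3 of flags were set, CRC/sizes would live in a trailing
    # data descriptor instead of the header; this sketch assumes not.
    assert not flags & 0x8, "data descriptor present; CRC not in header"
    data_start = offset + 30 + nlen + xlen
    raw = buf[data_start:data_start + csize]
    # Method 8 = deflate (the usual case in OOXML), 0 = stored.
    data = zlib.decompress(raw, -15) if method == 8 else raw
    return zlib.crc32(data) & 0xFFFFFFFF == crc

with open("report.docx", "rb") as f:  # placeholder file name
    print(verify_local_entry(f.read()))  # True if the first entry is intact
```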
In the InterPlanetary File System (IPFS), consumers can help each other protect data against hardware failures and improve data availability through replication. While previous replication methods for peer-to-peer (P2P) networks can be used to increase data availability in the IPFS network, they are either hostile to peers with limited availability, preventing those peers from achieving adequate data availability, or lack flexibility. An ideal replication method should optimize data availability in a manner equitable to all peers while providing flexibility. To achieve this goal, this paper introduces a blockchain-based file replication mechanism. Leveraging the tamper-proof and traceable nature of blockchain technology, our mechanism achieves secure storage and trustworthy querying of the peer information used in the file replication process. Unlike most earlier methods, our mechanism employs an Arweave-inspired file replication algorithm that prioritizes the less available files within the system for replication until the availability of all files is optimized. Replicating files according to predefined system-wide cooperation rules in this way not only limits the selfishness of peers but also facilitates timely adjustments in response to changes in the P2P system. In addition, our mechanism uses smart contracts to judge and exclude dishonest peers, thereby fostering honest cooperation among peers without involving any third party.
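To make the least-available-first idea concrete, here is a hedged sketch of a single replication step. The availability model (one minus the product of the holders' downtime probabilities) and all names are assumptions for illustration; the paper's actual mechanism reads peer information from the blockchain and enforces the rules via smart contracts.

```python
# Sketch of a least-available-first replication step, in the spirit of
# the Arweave-inspired algorithm described above. File availability is
# modeled as 1 - prod(1 - uptime of each peer holding a replica).
from math import prod

def availability(holders, uptime):
    return 1.0 - prod(1.0 - uptime[p] for p in holders)

def replicate_step(files, uptime, candidates):
    """files: dict cid -> set of holder peers. Adds one replica of the
    least-available file on the highest-uptime candidate not holding it."""
    cid = min(files, key=lambda c: availability(files[c], uptime))
    free = [p for p in candidates if p not in files[cid]]
    if free:
        files[cid].add(max(free, key=lambda p: uptime[p]))
    return cid

uptime = {"A": 0.9, "B": 0.6, "C": 0.8}
files = {"Qm1": {"A"}, "Qm2": {"B"}}
print(replicate_step(files, uptime, uptime))  # Qm2 is less available: it gets a replica
```

Repeating this step until the minimum availability meets a target mirrors the paper's goal of optimizing all files' availabilities rather than letting each peer replicate selfishly.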