The book covers an overview of cyberfraud and the associated global statistics. It demonstrates practicable techniques that financial institutions can employ to make effective decisions geared towards cyberfraud mitigation. Furthermore, the book discusses emerging technologies, such as information and communication technologies (ICT), forensic accounting, and big data technologies, tools and analytics employed in fraud mitigation. In addition, it highlights the implementation of techniques such as the fuzzy analytical hierarchy process (FAHP) and the systems thinking approach to address information and security challenges. The book combines a case study, empirical findings, a systematic literature review, and theoretical and conceptual frameworks to provide practicable solutions for mitigating cyberfraud. The major contributions of this book include the demonstration of digital and emerging techniques, such as forensic accounting, for cyberfraud mitigation. It also provides in-depth statistics about cyberfraud, its causes, its threat actors, practicable mitigation solutions, and the application of a theoretical framework for fraud profiling and mitigation.
Abstract The Benford law is used worldwide for detecting non-conformance or data fraud in numerical data. It states that the significand of a data set from the universe is not uniformly but logarithmically distributed. In particular, the first non-zero digit is 1 with an approximate probability of 0.301. Several tests are available for testing Benford conformance; the best known are Pearson's $\chi^2$-test, the Kolmogorov–Smirnov test and a modified version of the MAD-test. In the present paper we propose some tests; three of the four invariant sum tests are new, and they are motivated by the sum-invariance property of the Benford law. Two distance measures are investigated: the Euclidean and the Mahalanobis distance of the standardized sums to the origin. We use the significands corresponding to the first significant digit as well as to the second significant digit, respectively. Moreover, we suggest improved versions of the MAD-test and obtain critical values that are independent of the sample size. For illustration, the tests are applied to specifically selected data sets for which prior knowledge is available about being or not being Benford. Furthermore, we discuss the role of truncation of distributions.
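The first-digit probabilities and the classical $\chi^2$ goodness-of-fit test mentioned in the abstract can be sketched as follows. This is an illustrative implementation, not the authors' code; the helper names (`benford_probs`, `first_digit`, `chi2_statistic`) are our own.

```python
import math
from collections import Counter

def benford_probs():
    # Benford's law: P(first digit = d) = log10(1 + 1/d), for d = 1..9.
    # These sum exactly to 1, and P(d = 1) is about 0.301.
    return [math.log10(1 + 1 / d) for d in range(1, 10)]

def first_digit(x):
    # Scientific notation exposes the leading significant digit directly.
    s = f"{abs(x):.15e}"
    return int(s[0])

def chi2_statistic(data):
    # Pearson's chi-squared statistic of observed first-digit counts
    # against the Benford expected counts n * p_d.
    n = len(data)
    counts = Counter(first_digit(x) for x in data)
    return sum((counts.get(d, 0) - n * p) ** 2 / (n * p)
               for d, p in enumerate(benford_probs(), start=1))
```

In use, the statistic would be compared against the $\chi^2$ critical value with 8 degrees of freedom (15.51 at the 5% level); digit counts that deviate strongly from the logarithmic distribution push the statistic above that threshold.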
Fraud in healthcare insurance claims is a significant research challenge that affects the growth of healthcare services. Healthcare fraud is perpetrated by subscribers, companies and providers. A decision support system is developed to automate the processing of claim data from service providers and to offset the patients' challenges. In this paper, a novel hybridized big data and statistical machine learning technique, named MapReduce-based iterative support vector machine (MR-ISVM), is proposed; it provides a set of sophisticated steps for the automatic detection of fraudulent claims in health insurance databases. The experimental results show that the MR-ISVM classifier outperforms other support vector machine (SVM) kernel classifiers in classification and detection. The results also show a positive impact in reducing the computational time for processing healthcare insurance claims without compromising classification accuracy. The proposed MR-ISVM classifier achieves 87.73% accuracy, compared with 75.3% for the linear kernel and 79.98% for the radial basis function kernel.
This study aims to better understand transnational computer fraud in Vietnam utilizing crime script analysis. Data from criminal profiles and in-depth interviews with investigators were combined, and the results showed that Vietnam could become an operational base for both domestic and foreign criminals to implement transnational computer fraud. This type of fraud, which includes crimes with only minor technological elements and those involving almost entirely technological factors, represents the intersection of fraud, transnationality, and technology. Technology can support criminals in defrauding victims transnationally without the need for direct interaction. Moreover, the study clarified the different roles of Vietnamese and foreign offenders in the two types of transnational computer fraud: bank card data fraud and phone scams. As the first study of this nature implemented in Vietnam, this research contributes to the knowledge of computer fraud, especially in Asia, providing a foundation for future investigations related to this kind of cybercrime.
Recent fraud cases in psychological and medical research have emphasized the need to pay attention to Questionable Research Practices (QRPs). Deliberate or not, QRPs usually have a deteriorating effect on the quality and the credibility of research results. QRPs must be revealed, but prevention of QRPs is more important than detection. I suggest two policy measures that I expect to be effective in improving the quality of psychological research. First, research data and research materials should be made publicly available so as to allow verification. Second, researchers should more readily consider consulting a methodologist or a statistician. These two measures are simple, but they run counter to the common practices of keeping data to oneself and overestimating one's own methodological and statistical skills, practices that allow secrecy and errors to enter research.
This paper presents a method for detecting and restoring integer datasets that have been manipulated by operations involving nonintegral real-number multiplication and rounding. As we discuss in the paper, detecting and restoring such manipulated integer datasets is not straightforward, nor are there any known solutions. We introduce the manipulation process, which was motivated by an actual case of fraud, and survey several areas of literature dealing with the possibility that manipulation may have happened or might occur. From our mathematical analysis of the manipulation process, we can prove that the nonintegral real number ($\alpha$) used in the multiplication exists not as a single real number but as an interval containing infinitely many real numbers, any of which could have been used to produce the same manipulation result. Based on these analytic findings, we provide an algorithm that can detect and restore manipulated integer datasets. To validate our algorithm, we applied it to 40,000 test datasets that were randomly generated using controllable parameters that matched the real fraud case. Our results indicated that the algorithm detected and perfectly restored all datasets for which the value of the nonintegral real number was at least 16 ($\alpha \geq 16$) and the number of data entries was at least 40 ($n \geq 40$).
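The interval property of $\alpha$ can be illustrated with a small sketch; this is our own toy example, not the paper's detection algorithm. If each manipulated value is $y_i = \mathrm{round}(\alpha \, x_i)$, then each pair $(x_i, y_i)$ constrains $\alpha$ to $[(y_i - 0.5)/x_i,\ (y_i + 0.5)/x_i]$, and intersecting these constraints yields a nondegenerate interval rather than a single value.

```python
def alpha_interval(x, y):
    """Intersect the per-entry constraints implied by y_i = round(alpha * x_i):
    alpha must lie in [(y_i - 0.5) / x_i, (y_i + 0.5) / x_i] for every i.
    Returns (lo, hi), or None if the pairs are mutually inconsistent."""
    lo, hi = 0.0, float("inf")
    for xi, yi in zip(x, y):
        lo = max(lo, (yi - 0.5) / xi)
        hi = min(hi, (yi + 0.5) / xi)
    return (lo, hi) if lo < hi else None

# Manipulate a small integer dataset with alpha = 16.3; the recovered
# constraint set is a whole interval of alphas, each of which would have
# produced exactly the same manipulated values.
x = [3, 7, 12, 40]
y = [round(16.3 * xi) for xi in x]
interval = alpha_interval(x, y)
```

Adding more data entries tightens the intersection, which is consistent with the paper's finding that restoration succeeds once the dataset is large enough ($n \geq 40$ in their experiments).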
Linguistic category priming is a novel paradigm to examine automatic influences of language on cognition (Semin, 2008). An initial article reported that priming abstract linguistic categories (adjectives) led to more global perceptual processing, whereas priming concrete linguistic categories (verbs) led to more local perceptual processing (Stapel & Semin, 2007). However, this report was compromised by data fabrication by the first author, so that it remains unclear whether or not linguistic category priming influences perceptual processing. To fill this gap in the literature, the present article reports 12 studies among Dutch and US samples examining the perceptual effects of linguistic category priming. The results yielded no evidence of linguistic category priming effects. These findings are discussed in relation to other research showing cultural variations in linguistic category priming effects (IJzerman, Saddlemyer, & Koole, 2014). The authors conclude by highlighting the importance of conducting and publishing replication research for achieving scientific progress.
• Stapel fabricated data suggesting that verb (vs. adjective) priming leads to concrete foci.
• Before and after his exposure, we "replicated" these hypotheses.
• Across 12 studies, we fail to find the effect.
• We discuss the benefits of data sharing and replications.
In this article, the author examines the American passenger and immigration acts of 1893, 1903 and 1907, whose purpose was to manage immigration to the USA. She presents the efforts to restrict immigration on the basis of criteria concerning the usefulness, or potential danger, of immigrants. She shows that the data recorded in ship passenger lists were not always reliable, since the desire of both emigrants and shipping companies to gain entry to the USA was stronger than the fear of rejection.