The Common Vulnerabilities and Exposures (CVE) system is a widely used standard for identifying and tracking known vulnerabilities in software systems. The severity of these vulnerabilities must be determined in order to prioritize mitigation efforts. However, assigning severity to a vulnerability is a challenging task that requires careful analysis of its characteristics and potential impact. Given the vast number of vulnerabilities identified every year, it is vital to automate severity assignment, thereby reducing manual effort. This paper proposes a novel approach for predicting the severity of vulnerabilities from their CVE descriptions using GPT-2, a state-of-the-art language model. The imbalance in the distribution of CVSS severity values is addressed using oversampling and contextual data augmentation techniques. The approach leverages the large-scale language modeling capabilities of GPT-2 to automatically extract relevant features from CVE descriptions and predict the severity level of the vulnerability. Evaluated on a test set of 7,765 CVEs, the model achieves an accuracy of 84.2% and an F1 score of 0.82 in predicting vulnerability severity. A comparative analysis against state-of-the-art methods demonstrates the superior performance of the proposed approach. Based on these results, the proposed approach could serve as a valuable tool for quickly and accurately identifying high-severity vulnerabilities, facilitating more efficient and effective vulnerability management practices. Furthermore, it could be extended to other natural language processing tasks related to vulnerability analysis and management.
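The oversampling step mentioned in this abstract can be illustrated with a minimal random-oversampling sketch; the paper also uses contextual data augmentation, which is not reproduced here, and the function name and example data below are hypothetical, not taken from the paper:

```python
import random
from collections import defaultdict

def oversample(examples, seed=0):
    """Random oversampling: duplicate minority-class examples until every
    severity class matches the size of the largest class.
    `examples` is a list of (cve_description, severity_label) pairs."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for desc, label in examples:
        by_label[label].append((desc, label))
    target = max(len(items) for items in by_label.values())
    balanced = []
    for label, items in by_label.items():
        balanced.extend(items)                              # keep originals
        balanced.extend(rng.choices(items, k=target - len(items)))  # duplicates
    rng.shuffle(balanced)
    return balanced

# Toy, imbalanced training set (3 CRITICAL vs. 1 MEDIUM).
data = [("buffer overflow in component X", "CRITICAL"),
        ("information leak in component Y", "MEDIUM"),
        ("remote code execution in Z", "CRITICAL"),
        ("heap corruption in W", "CRITICAL")]
balanced = oversample(data)  # 3 examples per class after oversampling
```

A balanced set like this would then be fed to the fine-tuned classifier, so that rare severity classes are not drowned out during training.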
This paper analyzes security problems of modern computer systems caused by vulnerabilities in their operating systems (OSs). Our scrutiny of widely used enterprise OSs focuses on their vulnerabilities: we examine the statistical data available on how vulnerabilities in these systems are disclosed and eliminated, and we assess their criticality, using statistics from both the National Vulnerability Database and the Common Vulnerabilities and Exposures system. The specific technical areas the paper covers are the quantitative assessment of forever-day vulnerabilities, the estimation of days-of-grey-risk, and the analysis of vulnerability severity and its distribution by attack vector and by impact on security properties. In addition, the study explores vulnerabilities that have been found across a diverse range of OSs. This leads us to analyze how different intrusion-tolerant architectures deploying OS diversity affect availability, integrity, and confidentiality.
Deep learning (DL) has been a common thread across several recent techniques for vulnerability detection. The rise of large, publicly available datasets of vulnerabilities has fueled the learning process underpinning these techniques. While these datasets help the DL-based vulnerability detectors, they also constrain these detectors' predictive abilities. Vulnerabilities in these datasets have to be represented in a certain way, e.g., code lines, functions, or program slices within which the vulnerabilities exist. We refer to this representation as a base unit. The detectors learn how base units can be vulnerable and then predict whether other base units are vulnerable. We have hypothesized that this focus on individual base units harms the ability of the detectors to properly detect those vulnerabilities that span multiple base units (or MBU vulnerabilities). For vulnerabilities such as these, a correct detection occurs when all comprising base units are detected as vulnerable. Verifying how existing techniques perform in detecting all parts of a vulnerability is important to establish their effectiveness for other downstream tasks. To evaluate our hypothesis, we conducted a study focusing on three prominent DL-based detectors: ReVeal, DeepWukong, and LineVul. Our study shows that all three detectors contain MBU vulnerabilities in their respective datasets. Further, we observed significant accuracy drops when detecting these types of vulnerabilities. We present our study and a framework that can be used to help DL-based detectors toward the proper inclusion of MBU vulnerabilities.
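The MBU correctness criterion described above (a vulnerability counts as detected only when every comprising base unit is flagged) can be sketched in a few lines; the identifiers and example data are hypothetical, not drawn from the study's framework:

```python
def mbu_detected(flagged, mbu_units):
    """An MBU vulnerability is correctly detected only when every one of
    its comprising base units is flagged as vulnerable."""
    return all(unit in flagged for unit in mbu_units)

def mbu_accuracy(flagged, vulnerabilities):
    """Fraction of MBU vulnerabilities fully detected by the detector."""
    hits = sum(mbu_detected(flagged, units) for units in vulnerabilities)
    return hits / len(vulnerabilities)

# Hypothetical detector output: base units (here, functions) flagged as vulnerable.
flagged = {"f1", "f2", "f4"}
# Each MBU vulnerability spans several base units.
vulns = [{"f1", "f2"}, {"f2", "f3"}, {"f4"}]
score = mbu_accuracy(flagged, vulns)  # 2 of 3 vulnerabilities fully detected
```

Note how a detector that flags most individual units vulnerable can still miss an MBU vulnerability entirely: flagging "f2" but not "f3" leaves the second vulnerability undetected under this stricter criterion.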
There is a debate at the highest levels of government and civil society over whether the rise in fraud is a blip exacerbated by the pandemic or a more fundamental transformation (or flip) in the structure of crime. This paper examines this debate by exploring the social factors influencing the level of fraud. It does so by conceptualising these factors as a fraud field, offering a novel way to visualise and consider vulnerability through the broad categories of ‘threats’ and ‘safeguards’ which influence levels of fraud. In doing so it offers the first attempt to map the wide range of ‘forces’ that influence levels of fraud and produces a mathematical expression of this, which will provide a basis for further debate, refinement and research. The threats include the myriad of opportunities, the large population of fraudsters, and the range of human and technology enablers that support the fraudsters. The principal safeguards that resist the fraud threat are culture, the law and the defensive resilience of individuals and organisations. Using the fraud field and official crime statistics, the paper argues that the safeguards are so structurally weak that the fraud epidemic is not a blip, but an opportunistic flip from traditional acquisitive crimes. The forces influencing levels of fraud mean high volumes of fraud will continue at the current levels or even accelerate further, unless there is a collective national strategy to strengthen the safeguards.
Blockchain has a range of built-in features, such as distributed ledger, decentralized storage, authentication, security, and immutability, and has moved beyond hype to practical applications in industry sectors such as healthcare. Blockchain applications in the healthcare sector generally face more stringent authentication, interoperability, and record-sharing requirements, due to exacting legal requirements such as the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Building on existing blockchain technologies, researchers in both academia and industry have started to explore applications that are geared toward healthcare use. These applications include smart contracts, fraud detection, and identity verification. Even with these improvements, concerns remain, as blockchain technology has its own specific vulnerabilities and issues that need to be addressed, such as mining incentives, mining attacks, and key management. Additionally, many healthcare applications have unique requirements that are not addressed by many of the blockchain experiments being explored, as highlighted in this survey paper. A number of potential research opportunities are also discussed in this paper.
The COVID-19 pandemic has had wide-ranging implications for the academic community, and there have been numerous commentaries on its effects on qualitative health research. However, the vulnerabilities faced by participants and researchers during the pandemic have remained underexplored. Addressing this gap, this reflective article discusses the intersecting challenges and opportunities arising for participants and researchers in qualitative health research during the pandemic through the lens of layered vulnerability. Vulnerability, as a layered concept, offers novel insight into the effects of the pandemic because it captures the multifaceted and dynamic nature of vulnerabilities while considering individual differences and contexts. Reflecting on the research we conducted during the pandemic, we draw out the layers of vulnerability that both participants and researchers faced during the research process, as well as the obligations and strategies we developed to mitigate these vulnerabilities. We discuss the intersectionality of individual characteristics and the digitisation of work and life, including the impact of moving qualitative health research online and the use of creative methodological approaches. Our article highlights how, by engaging with their own vulnerabilities throughout the research process, researchers can develop new and creative solutions for qualitative research that mitigate the increased vulnerabilities participants faced during the pandemic.
•Layered vulnerabilities enable reflection on research challenges and opportunities.
•The emotional impact of the pandemic is a key vulnerability.
•The digitisation of work and life is a key vulnerability.
•Researchers can mitigate increased participant vulnerabilities.
•In this light it is crucial to see ethics as an iterative and critical process.
Extant research has focused on the detection of fake reviews on online review platforms, motivated by the well-documented impact of customer reviews on the users’ purchase decisions. The problem is typically approached from the perspective of protecting the credibility of review platforms, as well as the reputation and revenue of the reviewed firms. However, there is little examination of the vulnerability of individual businesses to fake review attacks. This study focuses on formalizing the visibility of a business to the customer base and on evaluating its vulnerability to fake review attacks. We operationalize visibility as a function of the features that a business can cover and its position in the platform’s review-based ranking. Using data from over 2.3 million reviews of 4,709 hotels from 17 cities, we study how visibility can be impacted by different attack strategies. We find that even limited injections of fake reviews can have a significant effect and explore the factors that contribute to this vulnerable state. Specifically, we find that, in certain markets, 50 fake reviews are sufficient for an attacker to surpass any of its competitors in terms of visibility. We also compare the strategy of self-injecting positive reviews with that of injecting competitors with negative reviews and find that each approach can be as much as 40% more effective than the other across different settings. We empirically explore response strategies for an attacked hotel, ranging from the enhancement of its own features to detecting and disputing fake reviews. In general, our measure of visibility and our modeling approach regarding attack and response strategies shed light on how businesses that are targeted by fake reviews can detect and tackle such attacks.
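The ranking component of the attack described above can be sketched with a deliberately simplified toy model: hotels ranked by average rating, and a self-injection attack that adds top-rated fake reviews. The actual visibility measure in the study also accounts for feature coverage and is more elaborate; everything below (names, data, the mean-rating ranking) is an illustrative assumption:

```python
from statistics import mean

def ranking(reviews):
    """Rank hotels by average rating (1 = most visible). A toy stand-in
    for the review-based ranking component of visibility."""
    order = sorted(reviews, key=lambda h: mean(reviews[h]), reverse=True)
    return {hotel: pos + 1 for pos, hotel in enumerate(order)}

def self_inject(reviews, target, k, fake_rating=5.0):
    """Simulate a self-injection attack: the target adds k fake
    top-rated reviews to its own listing."""
    attacked = {h: list(r) for h, r in reviews.items()}
    attacked[target].extend([fake_rating] * k)
    return attacked

hotels = {"HotelA": [4.0, 4.0, 4.0],        # mean 4.00
          "HotelB": [5.0, 4.0, 5.0]}        # mean 4.67
before = ranking(hotels)                     # HotelA ranked below HotelB
after = ranking(self_inject(hotels, "HotelA", 10))  # HotelA overtakes
```

Even this crude model shows the mechanism the study quantifies at scale: a modest number of injected reviews can flip the relative ranking of close competitors.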
The delegation of the management of aquatic environments and flood prevention to public inter-municipal cooperation establishments on January 1, 2018 resulted in a deep recomposition of the system of actors in charge of the dikes, and in a questioning of coastal dike management policies. On the Normandy coast, a series of semi-structured interviews with institutional actors, associations and users was conducted between 2018 and 2021. This investigation highlights, on the one hand, a multiplicity of uncoordinated former managers and, on the other hand, a relative abandonment of several protective dikes that calls into question their very future. In view of the uncertainties linked to the transfer of competencies, the vulnerability of these mostly agricultural territories, and the concerns of all actors about the risk of submersion and salinization, several contrasting orientations are proposed by institutional actors or by former managers. These proposals range from hard defense (strengthening of dikes) to softer management (partial depolderization, retreat of dikes).