Concealing malicious components within widely used USB peripherals has become a popular attack vector that leverages social engineering and exploits users' trust in USB devices. This vector enables an attacker to easily penetrate an organization's computers, even when the target is secured or on an air-gapped network. Such malicious concealment can take place as part of a supply chain attack or during the device manufacturing process. When a device allows the user to update its firmware, a supply chain attack may involve changing just the device's firmware, compromising the device without the need for physical concealment. A compromised device can impersonate other devices, such as keyboards, in order to send malicious keystrokes to the computer. However, maliciously generated keystrokes do not match human keystroke characteristics and can therefore be easily detected by security tools designed to continuously verify the user's identity based on his/her keystroke dynamics. In this paper, we present Malboard, a sophisticated attack based on dedicated hardware concealment that automatically generates keystrokes bearing the attacked user's behavioral characteristics; these keystrokes are injected into the computer in the form of malicious commands and can thus evade existing detection mechanisms designed to continuously verify the user's identity based on keystroke dynamics. We implemented this novel attack and evaluated its performance on 30 subjects performing three different keystroke tasks, testing it against three existing detection mechanisms; the results show that our attack evaded detection in 83–100% of the cases, depending on the detection tools in place. Malboard proved effective in two scenarios: a remote attacker who communicates with Malboard wirelessly, and an inside attacker (a malicious employee) who physically operates and uses Malboard.
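To make the timing-mimicry idea concrete, the following sketch models a user's keystroke dynamics as per-user Gaussian distributions over dwell (key-hold) and flight (inter-key) times and samples injected keystrokes from them. The function names, the two-parameter profile, and the Gaussian model are illustrative assumptions, not the paper's actual implementation.

```python
import random
import statistics

def learn_timing_profile(samples):
    """Fit per-user mean/stdev for dwell (key-hold) and flight (inter-key)
    times from recorded (dwell, flight) pairs, in seconds."""
    dwell = [s[0] for s in samples]
    flight = [s[1] for s in samples]
    return {
        "dwell": (statistics.mean(dwell), statistics.stdev(dwell)),
        "flight": (statistics.mean(flight), statistics.stdev(flight)),
    }

def synthesize_keystrokes(command, profile, seed=0):
    """Schedule press/release events for `command` with user-like timing
    drawn from the learned profile."""
    rng = random.Random(seed)
    events, t = [], 0.0
    for ch in command:
        dwell = max(0.01, rng.gauss(*profile["dwell"]))
        flight = max(0.0, rng.gauss(*profile["flight"]))
        events.append((t, t + dwell, ch))  # (press time, release time, key)
        t += dwell + flight
    return events
```

A real attack would need a richer model (e.g., per-digraph timing), but even this two-parameter version illustrates why fixed-rate injected keystrokes stand out against a behavioral baseline.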
In addition, to address this evasion gap, we developed three modules aimed at detecting keystroke injection attacks in general and the more sophisticated Malboard attack in particular. The proposed detection modules are trusted and secure, because they are based on three side-channel resources that originate from the interaction between the keyboard, the user, and the attacked host: (1) the keyboard's power consumption, (2) the sound of the keystrokes, and (3) the user's behavior, specifically his/her ability to respond to displayed textual typographical errors. Our results show that each of the proposed detection modules detects the Malboard attack in 100% of the cases, with no misses and no false positives; using them together as an ensemble detection framework ensures that an organization is immune to the Malboard attack in particular and to other keystroke injection attacks in general.
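As a rough illustration of the acoustic side channel (module 2), the sketch below checks whether each keystroke reported over USB coincides with an acoustic peak recorded near the keyboard; injected keystrokes produce no physical sound and go unmatched. The function names, tolerance, and threshold are assumptions for illustration only.

```python
def sound_mismatch_rate(keystroke_times, sound_peaks, tol=0.05):
    """Fraction of USB keystroke events with no acoustic peak within
    `tol` seconds; a genuine physical keypress should always have one."""
    unmatched = [t for t in keystroke_times
                 if not any(abs(t - p) <= tol for p in sound_peaks)]
    return len(unmatched) / max(1, len(keystroke_times))

def is_injected(keystroke_times, sound_peaks, threshold=0.5):
    """Flag the keystroke stream as injected if most events are silent."""
    return sound_mismatch_rate(keystroke_times, sound_peaks) > threshold
```

The power-consumption and typo-response modules would contribute analogous boolean verdicts, and an ensemble could OR them together, trading a slightly higher false-positive rate for zero misses.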
Network intrusion attacks are a well-known threat. To detect such attacks, network intrusion detection systems (NIDSs) have been developed and deployed. These systems apply machine learning models to high-dimensional feature vectors extracted from network traffic to detect intrusions. Advances in NIDSs have made it challenging for attackers, who must execute attacks without being detected by these systems. Prior research on bypassing NIDSs has mainly focused on perturbing the features extracted from the attack traffic to fool the detection system; however, this may jeopardize the attack's functionality. In this work, we present TANTRA, a novel end-to-end Timing-based Adversarial Network Traffic Reshaping Attack that can bypass a variety of NIDSs. Our evasion attack utilizes a long short-term memory (LSTM) deep neural network trained to learn the time differences between the target network's benign packets. The trained LSTM is used to set the time differences between the malicious traffic packets, without changing their content, such that they "behave" like benign network traffic and are not detected as an intrusion. We evaluate TANTRA on eight common intrusion attacks and three state-of-the-art NIDSs, achieving an average evasion success rate of 99.99%. We also propose a novel mitigation technique to address this new evasion attack.
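The core reshaping step can be sketched as follows: attack packets keep their content but are re-timed with inter-packet deltas characteristic of benign traffic. The paper trains an LSTM to predict these deltas; in this simplified stand-in, a bootstrap sample of observed benign deltas plays that role, and all names are illustrative assumptions.

```python
import random

def reshape_timestamps(attack_payloads, benign_deltas, start_time=0.0, seed=0):
    """Rewrite the timestamps of attack packets without touching their
    payloads, spacing them with inter-packet time differences drawn from
    benign traffic (a stand-in for the trained LSTM's predictions)."""
    rng = random.Random(seed)
    t, reshaped = start_time, []
    for payload in attack_payloads:
        t += rng.choice(benign_deltas)   # benign-looking inter-arrival gap
        reshaped.append((t, payload))    # (new timestamp, unchanged payload)
    return reshaped
```

Because only timing changes, the attack's payload-level functionality is preserved, which is exactly the property that feature-perturbation approaches struggle to guarantee.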
When neural networks are employed for high-stakes decision-making, it is desirable that they provide explanations for their predictions so that we can understand the features that contributed to the decision. At the same time, it is important to flag potential outliers for in-depth verification by domain experts. In this work, we propose to unify two differing aspects of explainability with outlier detection. We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their prediction while identifying regions of similarity between the predicted sample and the examples. The examples are real prototypical cases sampled from the training set via a novel iterative prototype replacement algorithm. Furthermore, we propose using the prototype similarity scores to identify outliers. We compare the classification performance, explanation quality, and outlier detection of our proposed network with baselines. We show that our prototype-based networks, which extend beyond similarity kernels, deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
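The outlier-detection idea can be sketched in a few lines: if a sample's embedding is dissimilar to every prototype, the network has no prototypical training case to ground its prediction on, so the sample is flagged. Cosine similarity and the threshold here are illustrative assumptions, not the paper's exact scoring.

```python
def max_prototype_similarity(sample, prototypes):
    """Cosine similarity to the closest prototype embedding; a low maximum
    means the sample resembles none of the prototypical training cases."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = lambda v: sum(x * x for x in v) ** 0.5
        return dot / (norm(a) * norm(b))
    return max(cosine(sample, p) for p in prototypes)

def is_outlier(sample, prototypes, threshold=0.5):
    """Flag the sample for expert review when no prototype is similar."""
    return max_prototype_similarity(sample, prototypes) < threshold
```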
In this article, we provide an implementation, evaluation, and analysis of PowerHammer, an attack that uses power lines to exfiltrate data from air-gapped computers. Malicious code running on a compromised computer intentionally controls the utilization of the CPU cores. The CPU utilization is electromagnetically conducted and propagated through the power lines in the form of a parasitic signal that is modulated, encoded, and transmitted on top of the current flow fluctuations; this electromagnetic phenomenon is known as "conducted emission." In this attack, the attacker taps the indoor electrical power wiring connected to the electrical outlet of the compromised computer, analyzes the conducted electromagnetic emission, and decodes the exfiltrated data. We experimentally evaluate and characterize the proposed attack, discuss the communication performance, and present a set of defensive countermeasures. A crucial aspect of the proposed covert communication scheme is that it fully conforms to civilian and military conducted emission standards.
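The simplest modulation consistent with this description is on-off keying of CPU load, sketched below as a schedule of busy/idle periods; the bit time, state names, and this particular encoding are assumptions for illustration, not the article's measured parameters.

```python
def bits_to_load_schedule(bits, bit_time=1.0):
    """On-off keying sketch: a '1' bit saturates the CPU cores for
    `bit_time` seconds (strong conducted emission on the power line),
    while a '0' bit leaves them idle (weak emission)."""
    return [("busy" if b else "idle", bit_time) for b in bits]

def decode_load_schedule(schedule):
    """Receiver side: map measured busy/idle periods back to bits."""
    return [1 if state == "busy" else 0 for state, _ in schedule]
```

In practice the receiver sees a noisy current waveform rather than clean states, so the real decoder must filter, threshold, and synchronize, but the bit-to-load mapping is the essence of the covert channel.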
A new context-based model (CoBAn) for accidental and intentional data leakage prevention (DLP) is proposed. Existing methods attempt to prevent data leakage either by looking for specific keywords and phrases or by using various statistical methods. Keyword-based methods are not sufficiently accurate, since they ignore the context of the keyword, while statistical methods ignore the content of the analyzed text. The context-based approach we propose leverages the advantages of both. The new model consists of two phases: training and detection. During the training phase, clusters of documents are generated, and a graph representation of the confidential content of each cluster is created; this representation consists of key terms and the context in which they must appear in order to be considered confidential. During the detection phase, each tested document is assigned to several clusters, and its content is matched against each cluster's respective graph in an attempt to determine the document's confidentiality. Extensive experiments have shown that the model is superior to other methods in detecting leakage attempts in which the confidential information is rephrased or differs from the original examples provided in the learning set.
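The key-term-plus-context matching idea can be approximated very simply: a key term only counts as confidential when its required context terms appear in a surrounding token window. This is a toy stand-in for matching against the cluster's graph; the window size and function name are assumptions.

```python
def term_in_context(tokens, key_term, context_terms, window=10):
    """A key term counts as confidential only when all of its required
    context terms appear within `window` tokens of it, approximating a
    match against a cluster's key-term/context graph."""
    for i, tok in enumerate(tokens):
        if tok == key_term:
            nearby = set(tokens[max(0, i - window):i + window + 1])
            if all(c in nearby for c in context_terms):
                return True
    return False
```

This illustrates why the context-based approach outperforms plain keyword matching: the word "merger" alone is harmless, but "merger" near "valuation" and "internal" may signal confidential content.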
Recent work on adversarial learning has mainly focused on neural networks and domains in which those networks excel, such as computer vision and audio processing. The data in those domains is typically homogeneous, whereas domains with heterogeneous tabular datasets remain underexplored despite their prevalence. When searching for adversarial patterns within heterogeneous input spaces, an attacker must simultaneously preserve the complex domain-specific validity rules of the data and the adversarial nature of the identified samples. As a result, applying adversarial manipulations to heterogeneous datasets has proven challenging, and no generic attack method has yet been proposed. This study, however, argues that machine learning models trained on heterogeneous tabular data are as susceptible to adversarial manipulations as those trained on continuous or homogeneous data, such as images. To support this claim, we introduce a generic optimization framework for identifying adversarial perturbations in heterogeneous input spaces. The framework defines distribution-aware constraints to preserve the consistency of the adversarial examples and then incorporates them by embedding the heterogeneous input into a continuous latent space. Due to the nature of the underlying datasets, we focus on ℓ0 perturbations and demonstrate their real-life applicability. The effectiveness of the suggested approach is demonstrated using three datasets from different content domains. The results show that despite the constraints imposed on input validity in heterogeneous datasets, machine learning models trained on such data are still susceptible to adversarial examples.
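An ℓ0 constraint limits how many features an adversarial example may change, which matters in tabular domains where editing many fields is conspicuous or invalid. The projection below enforces such a budget; it is a generic sketch of the constraint, not the paper's full optimization framework, and omits the latent-space embedding step.

```python
def project_l0(original, perturbed, k):
    """Enforce an l0 budget: keep only the k largest-magnitude feature
    changes and revert every other feature to its original value."""
    deltas = [abs(p - o) for o, p in zip(original, perturbed)]
    keep = set(sorted(range(len(deltas)),
                      key=lambda i: deltas[i], reverse=True)[:k])
    return [perturbed[i] if i in keep else original[i]
            for i in range(len(original))]
```

In the full framework, categorical features would additionally be snapped back to valid category values and cross-feature consistency rules enforced, which is what the continuous latent-space embedding makes tractable.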
• Attacks on tabular data ignore complex (nominal) features and feature correlations.
• Mathematically define a valid real-world heterogeneous adversarial example.
• Use an embedding function to preserve feature correlations and value consistency.
• Implement and evaluate the framework in three data domains and learning models.
Initial penetration is one of the first steps of an Advanced Persistent Threat (APT) attack and is considered one of the most significant means of initiating cyber-attacks aimed at organizations. Such an attack usually results in the loss of sensitive and confidential information. Because email communication is an integral part of daily business operations, APT attackers frequently leverage email as an attack vector for the initial penetration of the targeted organization. Email allows the attacker to deliver malicious attachments or links to malicious websites, and attackers usually use social engineering to make the recipient open the malicious email, open the attachment, or click a link. Existing defensive solutions within organizations prevent executables from entering organizational networks via email; therefore, recent APT attacks tend to attach non-executable files (PDF, MS Office, etc.), which are widely used in organizations and mistakenly considered less suspicious. This article surveys existing academic methods for the detection of malicious PDF files, outlines an active learning framework, and highlights the correlation between the structural incompatibility of PDF files and their likelihood of maliciousness. Finally, we provide comparisons, insights, and conclusions, as well as avenues for future research to enhance the detection of malicious PDFs.
As the number of drones increases and the era in which they begin to fill the skies approaches, an important question needs to be answered: from a security and privacy perspective, are society and drones really prepared to handle the challenges that a large volume of flights will create? In this paper, we investigate security and privacy in the age of commercial drones. First, we focus on the research question: are drones and their ecosystems protected against attacks performed by malicious entities? We list a drone's targets, present a methodology for reviewing attack and countermeasure methods, perform a comprehensive review, analyze scientific gaps, present conclusions, and discuss future research directions. Then, we focus on the research question: is society protected against attacks conducted using drones? We list targets within society, profile the adversaries, review threats, present a methodology for reviewing countermeasures, perform a comprehensive review, analyze scientific gaps, present conclusions, and discuss future research directions. Finally, we address the primary research question: from a security and privacy perspective, are society and drones prepared to take their relationship one step further? Our analysis reveals that the technological means required to protect drones and society from one another have not yet been developed and that there is a tradeoff between the security and privacy of drones and that of society. That is, the two cannot be optimized concurrently, because the security and privacy of drones cannot be improved without decreasing the security and privacy of society, and vice versa.
Nassi, Ben; Pirutin, Yaron; Shams, Jacob. "Optical Speech Recovery From Desktop Speakers." Computer (Long Beach, Calif.), vol. 55, no. 11, Nov. 2022. Journal article, peer reviewed.
In this article, we show that the internal (electrical circuitry) and external (reflective diaphragm) design of desktop speakers may expose users to confidential information leakage. We demonstrate that these flaws are present in billions of devices produced by global manufacturers and discuss countermeasures.
Physical adversarial attacks against object detectors have seen increasing success in recent years. However, these attacks require direct access to the object of interest in order to apply a physical patch, and to hide multiple objects, an adversarial patch must be applied to each one. In this paper, we propose a contactless, translucent physical patch containing a carefully constructed pattern, which is placed on the camera's lens, to fool state-of-the-art object detectors. The primary goal of our patch is to hide all instances of a selected target class, while the optimization method used to construct the patch aims to ensure that the detection of the other (untargeted) classes remains unharmed. In our experiments, conducted on state-of-the-art object detection models used in autonomous driving, we study the effect of the patch on the detection of both the selected target class and the other classes. We show that our patch prevented the detection of 42.27% of all stop sign instances while maintaining high (nearly 80%) detection of the other classes.
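The dual objective described above, suppressing the target class while leaving other classes intact, can be written as a two-term loss. The formulation below is a hypothetical sketch of such an objective, not the paper's actual optimization; the function name, the averaging, and the weighting term `lam` are all assumptions.

```python
def patch_loss(target_scores, untargeted_before, untargeted_after, lam=1.0):
    """Hypothetical two-term objective: minimize the detector's confidence
    on the target class (first term) while penalizing any drift the lens
    patch causes in the untargeted classes' scores (second term)."""
    hide = sum(target_scores) / max(1, len(target_scores))
    preserve = sum(abs(a - b)
                   for a, b in zip(untargeted_after, untargeted_before))
    return hide + lam * preserve
```

Minimizing such a loss over the patch pattern (with a differentiable detector) pushes target-class detections toward zero while anchoring the remaining classes to their pre-patch behavior.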