Software obfuscation has always been a controversial research area. While theoretical results indicate that provably secure obfuscation is impossible in general, its widespread application in malware and commercial software shows that it is nevertheless popular in practice. Still, it remains largely unexplored to what extent today's software obfuscations keep up with state-of-the-art code analysis, and where we stand in the arms race between software developers and code analysts. The main goal of this survey is to analyze the effectiveness of different classes of software obfuscation against continuously improving deobfuscation techniques and off-the-shelf code analysis tools. The answer very much depends on the goals of the analyst and the available resources. On the one hand, many forms of lightweight static analysis have difficulties with even basic obfuscation schemes, which explains the unbroken popularity of obfuscation among malware writers. On the other hand, more expensive analysis techniques, in particular when used interactively by a human analyst, can easily defeat many obfuscations. As a result, software obfuscation for the purpose of intellectual property protection remains highly challenging.
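To make the asymmetry between lightweight static analysis and obfuscation concrete, the following sketch shows an opaque predicate, one of the basic obfuscation schemes discussed in this class of work: a branch condition whose outcome is fixed at runtime but which naive static analysis cannot resolve. The functions and constants are illustrative, not taken from the survey.

```python
# Illustrative opaque-predicate obfuscation (a minimal sketch, not the
# survey's own example). The predicate x*x >= 0 is always true for
# Python integers, so the else-branch is dead code, yet a lightweight
# static analyzer must still consider it.

def secret_check(pin: int) -> bool:
    """Original, readable logic."""
    return pin == 4711

def secret_check_obfuscated(pin: int) -> bool:
    """Same logic guarded by an always-true opaque predicate."""
    x = pin * 17 + 3
    if x * x >= 0:                      # always true, but not obviously so
        # XOR-masked comparison: equal iff pin == 4711
        return (pin ^ 0x1267) == (4711 ^ 0x1267)
    else:
        return False                    # dead branch, never reached

assert secret_check(4711) == secret_check_obfuscated(4711)
assert secret_check(1234) == secret_check_obfuscated(1234)
```

A human analyst stepping through this interactively sees immediately that the branch is constant, which mirrors the survey's conclusion that interactive analysis defeats many such schemes.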
Binary rewriting is the process of changing the semantics of a program without having the source code at hand. It is used for diverse purposes, such as emulation (e.g., QEMU), optimization (e.g., DynInst), observation (e.g., Valgrind), and hardening (e.g., control-flow integrity enforcement). This survey gives detailed insight into the development and state of the art in binary rewriting by reviewing 67 publications from 1966 to 2018. Starting from these publications, we provide an in-depth investigation of the challenges of binary rewriting and their respective solutions. Based on our findings, we establish a thorough categorization of binary rewriting approaches with respect to their use case, applied analysis technique, code-transformation method, and code-generation technique. We contribute a comprehensive mapping between binary rewriting tools, applied techniques, and their domains of application. Our findings emphasize that although much work has been done over the past decades, most of the effort went into rewriting general-purpose applications, ignoring other challenges such as altering throughput-oriented programs or software with real-time requirements, which are often used in the emerging field of the Internet of Things. To the best of our knowledge, our survey is the first comprehensive overview of the complete binary rewriting process.
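The core idea of static binary rewriting can be sketched in a few lines: machine code is modified in place, without source. The toy below overwrites an x86 instruction with NOP bytes (0x90); the code section is a made-up example, and real rewriters such as those surveyed must additionally recover control flow and fix up relocated addresses.

```python
# Toy static binary rewriting: NOP out a range of bytes in a code
# section. This is a minimal sketch of the concept only; real tools
# must handle disassembly, relocation, and control-flow recovery.

X86_NOP = 0x90

def patch_with_nops(code: bytes, offset: int, length: int) -> bytes:
    """Return a copy of `code` with `length` bytes at `offset` NOPped out."""
    if offset < 0 or offset + length > len(code):
        raise ValueError("patch range outside code section")
    return code[:offset] + bytes([X86_NOP] * length) + code[offset + length:]

# Hypothetical section: mov eax, 1 (b8 01 00 00 00); jmp short +2 (eb 02)
section = bytes.fromhex("b801000000eb02")
patched = patch_with_nops(section, 5, 2)   # neutralize the jmp
assert patched.hex() == "b8010000009090"
```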
• Update on Tor AS-level adversaries (comparison of measurement results for IPv4 in 2020 and 2022).
• Measuring AS-level adversaries for IPv6.
• Measuring AS-level adversaries in censored countries (case study, based on popular clients and blocked destinations in Russia).
Tor provides anonymity to millions of users around the globe, which has made it a valuable target for malicious actors. As a low-latency anonymity system, it is vulnerable to traffic correlation attacks from strong passive adversaries such as large autonomous systems (ASes). In preliminary work (Mayer et al., 2020), we developed a measurement approach utilizing the RIPE Atlas framework – a network of more than 11,000 probes worldwide – to infer the risk of deanonymization for IPv4 clients in Germany and the US.
In this paper, we apply our methodology to additional scenarios, providing a broader picture of the potential for deanonymization in the Tor network. In particular, we (a) repeat our earlier (2020) measurements in 2022 to observe changes over time, (b) adapt our approach to IPv6 to analyze the risk of deanonymization when using this next-generation Internet protocol, and (c) investigate the current situation in Russia, where censorship has intensified since the beginning of Russia's full-scale invasion of Ukraine. According to our results, Tor provides user anonymity at consistent quality: While individual numbers vary depending on client and destination, we were able to identify ASes with the potential to conduct deanonymization attacks. For clients in Germany and the US, however, the overall picture has not changed since 2020. In addition, the protocol version (IPv4 vs. IPv6) does not significantly impact the risk of deanonymization. Russian users are able to securely evade censorship using Tor. Their general risk of deanonymization is, in fact, lower than in the other investigated countries. Moreover, the few ASes with the potential to successfully perform deanonymization are operated by Western companies, further reducing the risk for Russian users.
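The adversary condition measured here reduces to a simple set operation: an AS that appears both on the path from client to guard and on the path from exit to destination can correlate entry and exit traffic. A minimal sketch, with placeholder AS numbers rather than measurement data:

```python
# Sketch of the AS-level correlation condition: an AS observing both
# sides of a Tor circuit can mount a traffic correlation attack.
# All AS numbers below are hypothetical placeholders.

def correlating_ases(client_to_guard: list[int], exit_to_dest: list[int]) -> set[int]:
    """Return the ASes present on both sides of the Tor circuit."""
    return set(client_to_guard) & set(exit_to_dest)

ingress = [64512, 3320, 1299]     # client ISP -> transit -> guard's AS
egress = [1299, 64513, 64514]     # exit's AS -> transit -> destination

assert correlating_ases(ingress, egress) == {1299}
```

The measurement work above is precisely about determining, via RIPE Atlas probes, which real-world ASes end up in this intersection for realistic client and destination choices.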
Nowadays, more and more applications are built with web technologies like HTML, CSS, and JavaScript, which are then executed in browsers. The web is used as an operating-system-independent application platform. With this change, authorization models change as well and no longer depend on operating system accounts and the underlying access controls and file permissions. Instead, these accounts are now implemented in the applications themselves, including all of the protective measures and security controls this requires. Because of the inherent complexity, flaws in the authorization logic are among the most common security vulnerabilities in web applications. Most applications are built on the concept of Access-Control Lists (ACLs), a security model that decides who can access which object. Object Capabilities, transferable rights to perform operations on specific objects, have been proposed as an alternative to ACLs, since they are not susceptible to certain attacks prevalent for ACLs. While their use has been investigated in various domains, such as smart contracts, they have not been widely applied to web applications. In this paper, we therefore present a general overview of the capability-based authorization model and adapt these approaches for use in web applications. Based on a prototype implementation, we show the potential of Object Capabilities to enhance security, but also provide insights into existing pitfalls and problems in porting such models to the web domain.
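The distinction between ACLs and Object Capabilities can be illustrated in a few lines: with capabilities, possession of an unforgeable reference is itself the authorization, and rights can be attenuated by handing out a narrower reference. The class names below are illustrative, not from the paper's prototype:

```python
# Minimal sketch of capability-based authorization: no identity lookup
# or ACL check is performed; whoever holds the capability may use it,
# and only for the operations it exposes. Names are hypothetical.

class Document:
    def __init__(self, text: str):
        self._text = text
    def read(self) -> str:
        return self._text
    def write(self, text: str) -> None:
        self._text = text

class ReadCapability:
    """Attenuated capability: grants read access only, no write."""
    def __init__(self, doc: Document):
        self._read = doc.read      # capture only the read operation
    def read(self) -> str:
        return self._read()

doc = Document("secret report")
read_cap = ReadCapability(doc)     # safe to hand to a less-trusted party

assert read_cap.read() == "secret report"
assert not hasattr(read_cap, "write")   # the capability cannot write
```

This also hints at why a capability model resists the confused-deputy attacks that plague ACLs: the deputy can only pass on the rights it was explicitly handed.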
In this paper, we quantify the effectiveness of third-party tracker blockers on a large scale. First, we analyze the architecture of various state-of-the-art blocking solutions and discuss the advantages and disadvantages of each method. Second, we perform a two-part measurement study on the effectiveness of popular tracker-blocking tools. Our analysis quantifies the protection offered against trackers present on more than 100,000 popular websites and 10,000 popular Android applications. We provide novel insights into the ongoing arms race between trackers and developers of blocking tools, as well as which tools achieve the best results under what circumstances. Among other findings, we discover that rule-based browser extensions outperform learning-based ones, trackers with smaller footprints are more successful at avoiding being blocked, and CDNs pose a major threat to the future of tracker-blocking tools. Overall, the contributions of this paper advance the field of web privacy by providing not only the largest study to date on the effectiveness of tracker-blocking tools, but also by highlighting the most pressing challenges and privacy issues of third-party tracking.
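The rule-based approach found most effective above can be sketched as a filter list matched against each outgoing request's host before the request is issued. The rules and URLs below are illustrative, not entries from a real filter list:

```python
# Sketch of rule-based tracker blocking: block a request if its host
# (or any parent domain) appears on a blocklist. The rules here are
# hypothetical; real lists like EasyList use a richer rule syntax.
from urllib.parse import urlparse

BLOCK_RULES = {"tracker.example", "ads.example"}   # hypothetical blocklist

def is_blocked(url: str) -> bool:
    """Match the request host and every parent domain against the list."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # e.g. cdn.tracker.example also matches the rule tracker.example
    return any(".".join(parts[i:]) in BLOCK_RULES for i in range(len(parts)))

assert is_blocked("https://tracker.example/pixel.gif")
assert is_blocked("https://cdn.tracker.example/t.js")
assert not is_blocked("https://news.example/article")
```

The CDN threat noted above is visible even in this sketch: once a tracker is served from the same host as legitimate content, host-based rules can no longer separate the two without collateral blocking.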
A scanning electron microscope (SEM) usually creates images in the range of megapixel resolutions, but analyzing an IC layer requires resolutions in the gigapixel range. To create such large images, many individual images must be taken and then fused into one large image, which poses unique challenges: SEM images are affected by distortion due to charging effects and often exhibit high levels of noise and low contrast. One way of reducing the entry barrier to IC reverse engineering is to develop algorithms that can provide good results even in the case of suboptimal image quality, as can be produced by comparatively older, pre-owned SEMs. The main contribution of this work is the introduction and evaluation of four new algorithms, capable of composing high-noise and low-contrast SEM images into fused images. While the problem of stitching small images into one fused image is not new, the application of stitching algorithms for noisy IC images poses challenges that have not been addressed in the literature.
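At the heart of any stitching pipeline is offset estimation between overlapping tiles, typically by maximizing a correlation score. A heavily simplified one-dimensional sketch of that alignment step (real stitching works on 2-D tiles and must cope with the noise and distortion described above):

```python
# 1-D sketch of correlation-based shift estimation, the core alignment
# step in image stitching. This is an illustration of the principle,
# not one of the four algorithms introduced in the work above.

def best_shift(a: list[float], b: list[float], max_shift: int) -> int:
    """Return the shift s maximizing correlation of a[i] with b[i+s],
    i.e. how far signal b is shifted to the right relative to a."""
    def score(s: int) -> float:
        return sum(a[i] * b[i + s]
                   for i in range(len(a)) if 0 <= i + s < len(b))
    return max(range(-max_shift, max_shift + 1), key=score)

# b is a copy of a shifted right by 3 samples
a = [0.0, 0.0, 1.0, 5.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
b = [0.0] * 3 + a[:-3]
assert best_shift(a, b, max_shift=4) == 3
```

Noise and low contrast flatten the correlation peak this maximization relies on, which is exactly why stitching noisy SEM tiles needs more robust algorithms than the textbook approach.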
AVRS
Pucher, Michael; Kudera, Christian; Merzdovnik, Georg
Proceedings of the 15th International Conference on Availability, Reliability and Security
08/2020
Conference Proceeding
Embedded systems and microcontrollers are becoming more and more popular as the Internet of Things continues to spread. However, while there is a wealth of methods and tools for analyzing software and firmware for architectures common to standard hardware, such as x86 or Arm, other systems have not been scrutinized so closely. One such widely used architecture is the AVR family of 8-bit microcontrollers, which is also used in projects like the Arduino platform. This lack of tools makes it more difficult to analyze such systems and identify potential security vulnerabilities. To get the most out of modern reverse engineering and debugging techniques such as fuzzing or concolic execution, sophisticated and correct emulators are required for dynamic analysis.
The presented work tries to close this gap by introducing AVRS, a lean AVR emulator prototype developed with reverse engineering in mind. It was implemented to overcome limitations in existing emulators, such as completeness or execution speed, and to provide simple interfaces for interaction with existing program analysis and reverse engineering tools. We provide an analysis of AVRS in relation to existing emulators and show the improvements in speed and completeness. In addition, we have created a setup that leverages AVRS for fuzz testing to automatically identify errors in AVR firmware. Our results indicate that AVRS is a valuable addition to the arsenal of analysis tools for embedded firmware and can easily be extended to allow the use of existing analysis tools in the domain of AVR microcontrollers.
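The heart of any instruction-set emulator like AVRS is a fetch-decode-execute loop over an architectural register file. The toy below illustrates that structure only; its "instructions" are made up for brevity and are not real AVR encodings (actual AVR instructions are 16-bit words with far richer semantics):

```python
# Toy fetch-decode-execute loop, sketching the core structure of an
# ISA emulator. Opcodes here are hypothetical, not real AVR encodings;
# only the 8-bit register wrap-around mirrors actual AVR behavior.

class TinyEmu:
    def __init__(self, program: list[tuple]):
        self.pc = 0
        self.regs = [0] * 8          # small register file
        self.program = program

    def step(self) -> bool:
        if self.pc >= len(self.program):
            return False             # end of program: halt
        op, *args = self.program[self.pc]
        self.pc += 1
        if op == "ldi":              # load immediate: ldi rd, k
            self.regs[args[0]] = args[1]
        elif op == "add":            # add rd, rr (8-bit wrap-around)
            self.regs[args[0]] = (self.regs[args[0]] + self.regs[args[1]]) & 0xFF
        else:
            raise ValueError(f"unknown opcode {op!r}")
        return True

emu = TinyEmu([("ldi", 0, 200), ("ldi", 1, 100), ("add", 0, 1)])
while emu.step():
    pass
assert emu.regs[0] == 44    # 300 wraps to 44 in an 8-bit register
```

A fuzzing setup like the one described above essentially drives such a loop with mutated firmware inputs and watches for crashes or invariant violations (here, the `ValueError` on an unknown opcode plays that role).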
In this paper we show that HSTS headers and long-term cookies (like those used for user tracking) are so prevalent that they allow a malicious Wi-Fi operator to gain significant knowledge about the past browsing history of users. We demonstrate how to combine both into a history-stealing attack by including specially crafted references in a captive portal or by injecting them into legitimate HTTP traffic. Captive portals are used on many Wi-Fi Internet hotspots to display a message to the user, like a login page or an acceptable use policy, before they are connected to the Internet. They are typically found in public places such as airports, train stations, or restaurants. Such systems have been known to be troublesome for many reasons. In this paper we show how a malicious operator can gain knowledge not only about the current Internet session, but also about the user's past. By invisibly placing vast amounts of specially crafted references into these portal pages, we can lure the browser into revealing a user's browsing history by either reading stored persistent (long-term) cookies or evaluating responses for previously set HSTS headers. The occurrence of a persistent cookie, as well as a direct call to a page's HTTPS site, is a reliable sign that the user has visited this site earlier. Thus, this technique allows for site-based history stealing, similar to the famous link-color history attacks. For the Alexa Top 1,000 sites, between 82% and 92% of sites are affected, as they use persistent cookies over HTTP. For the Alexa Top 200,000, we determined the share of vulnerable sites to be between 59% and 86%. We extended our implementation of this attack with other privacy-invading attacks that enrich the collected data with additional personal information.
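The HSTS side of the probe described above can be sketched in two steps: the portal embeds invisible http:// references to target sites, and the operator then observes whether the browser silently upgrades any of those requests to https://, which indicates a stored HSTS policy from an earlier visit. Site names and the observation format below are illustrative:

```python
# Sketch of the HSTS-based history probe: generate invisible http://
# references for a captive-portal page, then infer prior visits from
# which requests the browser upgraded to https. Sites are examples.

PROBED_SITES = ["news.example", "shop.example", "bank.example"]

def probe_html(sites: list[str]) -> str:
    """Build invisible http:// references for the portal page."""
    tags = [f'<img src="http://{s}/favicon.ico" style="display:none">'
            for s in sites]
    return "\n".join(tags)

def infer_visited(observed_schemes: dict[str, str]) -> list[str]:
    """A request upgraded to https despite the http:// reference implies
    a stored HSTS policy, i.e. an earlier visit to that site."""
    return [s for s in PROBED_SITES if observed_schemes.get(s) == "https"]

# Hypothetical observation: only bank.example was upgraded by the browser
observed = {"news.example": "http", "shop.example": "http",
            "bank.example": "https"}
assert infer_visited(observed) == ["bank.example"]
```

The cookie variant works analogously: instead of the request scheme, the operator inspects whether a persistent cookie accompanies the probe request.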