•The fatigue properties of a Ni-based alloy 718 manufactured by AM were studied.
•A defect in contact with the specimen surface has a higher influence in terms of √area_eff.
•The successful application of the √area parameter model was confirmed.
•A guide for fatigue design using statistics of extremes on defects is proposed.
•The lower bound of the fatigue limit σ_wl based on √area_eff,max can be predicted.
It is well known that high strength metallic materials with Vickers hardness HV > 400 are very sensitive to small defects. This paper discusses the fatigue properties of a Ni-based Superalloy 718 with HV ≈ 470 manufactured by additive manufacturing (AM). A key advantage of AM is its potential application to high-strength or hard steels that are difficult to machine into complex shapes by traditional methods. However, a major challenge of AM is the defects that are inevitably introduced during the manufacturing process.
The defects in the material investigated in this study were mostly gas pores and lack-of-fusion defects. The successful application of the √area parameter model was confirmed. Although statistics of extremes analysis is useful for the quality control of AM, the particular surface effect on the effective value of the defect size must be carefully considered. Since the orientations of defects in AM materials are random, a defect in contact with the specimen surface has a greater influence on fatigue strength than an internal defect: from the viewpoint of fracture mechanics, its effective size, termed √area_eff, is larger than its real size, √area. A guide for fatigue design and for the development of higher-quality Ni-based Superalloy 718 by AM processing, based on the combination of the statistics of extremes on defects and the √area parameter model, is proposed.
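The √area parameter model referred to above can be sketched numerically. The coefficients below (1.43 for surface defects, 1.56 for internal defects, with √area in μm, hardness in HV, and the result in MPa) are the classical published values of the model, not figures taken from this paper, and the AM-specific √area_eff correction for surface-contacting defects is not reproduced:

```python
def murakami_fatigue_limit(hv, sqrt_area_um, location="surface"):
    """Estimate the fatigue limit (MPa) with the classical Murakami
    sqrt(area) parameter model.

    hv           : Vickers hardness of the material
    sqrt_area_um : sqrt(area) defect size in micrometres
    location     : "surface" or "internal" defect

    The coefficients 1.43 (surface) and 1.56 (internal) are the
    standard published values; any paper-specific effective-size
    correction for AM defects touching the surface is not included.
    """
    coeff = {"surface": 1.43, "internal": 1.56}[location]
    return coeff * (hv + 120) / sqrt_area_um ** (1 / 6)

# Illustrative comparison at HV = 470 and sqrt(area) = 100 um:
sigma_surface = murakami_fatigue_limit(470, 100.0, "surface")
sigma_internal = murakami_fatigue_limit(470, 100.0, "internal")
```

For the same defect size, the surface formula predicts a lower fatigue limit, which is consistent with the observation above that a defect touching the surface is more harmful than an internal one.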
•Fatigue crack growth behavior was studied for application to a fatigue sensor.
•The effect of single overload and variable loading was investigated.
•The delayed retardation was observed in thin copper foil.
•A new model was proposed based on a modified Wheeler model.
Smart stress-memory patch (SSMP), which consists of electrodeposited copper foil, has been proposed in previous studies as a sensor to measure fatigue loading for structural health monitoring. In this study, the fatigue crack growth behavior of the copper foil was examined under constant amplitude loading with a single overload and under variable amplitude loading, in order to evaluate the applicability of SSMP to actual loading conditions. Under the overload condition, the fatigue crack growth rate gradually decreased after the overload, reached a minimum value, and then recovered to the baseline. Since this retardation behavior differed from that of thick specimens, a new model to quantify crack growth in the thin specimen was proposed based on a modified Wheeler model. The proposed model successfully reproduced the fatigue crack growth curve under the single overload condition. Additionally, the proposed model was extended to variable amplitude loading conditions by including a cycle-by-cycle calculation procedure. The results showed that the fatigue crack growth behavior under variable amplitude loading can be represented by the proposed model. Therefore, the feasibility of applying SSMP to actual loading conditions was demonstrated.
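The modified Wheeler model mentioned above builds on the classical Wheeler retardation factor, sketched below. The function and parameter names are illustrative, and the paper's modification for the delayed retardation observed in thin foil is not reproduced here:

```python
def wheeler_factor(a, a_ol, r_p_ol, r_p, m):
    """Classical Wheeler retardation factor Cp (0 < Cp <= 1).

    a      : current crack length
    a_ol   : crack length at the overload
    r_p_ol : plastic zone size created by the overload
    r_p    : current plastic zone size
    m      : empirical shaping exponent

    While the current plastic zone is embedded in the overload plastic
    zone, baseline growth is scaled as da/dN = Cp * (da/dN)_baseline.
    Note this classical form retards immediately after the overload;
    it does not capture the *delayed* minimum reported for thin foil.
    """
    if a + r_p >= a_ol + r_p_ol:  # crack has grown out of the overload zone
        return 1.0
    return (r_p / (a_ol + r_p_ol - a)) ** m
```

A cycle-by-cycle extension of the kind described in the abstract would simply re-evaluate this factor (or its modified form) at every load cycle while integrating the crack length.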
It is known that various types of location privacy attacks can be carried out using a personalized transition matrix that is learned for each target user, or a population transition matrix that is common to all target users. However, since many users disclose only a small amount of location information in their daily lives, the training data can be extremely sparse. The aim of this paper is to clarify the risk of location privacy attacks in this realistic situation. To achieve this aim, we propose a learning method that uses tensor factorization (or matrix factorization) to accurately estimate personalized transition matrices (or a population transition matrix) from a small amount of training data. To avoid the difficulty of directly factorizing the personalized transition matrices (or the population transition matrix), our learning method first factorizes a transition count tensor (or matrix), whose elements are the numbers of transitions the user has made, and then normalizes the counts to probabilities. We focus on a localization attack, which derives the actual location of a user at a given time instant from an obfuscated trace, and compare our learning method with the maximum likelihood (ML) estimation method in both the personalized matrix mode and the population matrix mode. The experimental results using four real data sets show that the ML estimation method performs only as well as a random guess in many cases, while our learning method significantly outperforms the ML estimation method on all four data sets.
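The final step of the learning method described above, turning (reconstructed) transition counts into a transition matrix, can be sketched as follows. The factorization itself is omitted; the function below only shows the count-to-probability normalization, with a hypothetical additive-smoothing parameter standing in for the filled-in values a low-rank reconstruction would provide:

```python
import numpy as np

def transition_matrix_from_counts(counts, alpha=0.0):
    """Turn a matrix of transition counts (counts[i, j] = number of
    observed moves from region i to region j) into a row-stochastic
    transition matrix.

    In the paper the counts are first reconstructed by low-rank
    tensor/matrix factorization to fill in unobserved transitions;
    here `alpha` is only an illustrative smoothing stand-in.
    """
    counts = np.asarray(counts, dtype=float) + alpha
    row_sums = counts.sum(axis=1, keepdims=True)
    zero = row_sums.ravel() == 0
    row_sums[row_sums == 0] = 1.0          # avoid division by zero
    probs = counts / row_sums
    probs[zero] = 1.0 / counts.shape[1]    # never-visited region: uniform fallback
    return probs
```

The uniform fallback for never-visited regions illustrates exactly the sparsity problem the paper targets: with few training traces, many rows have no counts at all, and plain ML estimation has nothing to normalize.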
Re-identification attacks based on a Markov chain model have been widely studied to understand how anonymized traces are linked to users. This approach is known to enable users to be re-identified with high accuracy when an adversary trains a personalized transition matrix for each target user using a large amount of training data, and when all of the anonymized traces are from the target users. In reality, however, the amount of training data for each target user can be very small, since many users disclose only a small amount of their location information to the public. In addition, many of the anonymized traces are from "non-target" users, whose personalized transition matrices cannot be trained in advance. This paper aims to quantify the risk of re-identification in this realistic situation. We first utilize the fact that spatial data can form a group structure, and propose group sparsity tensor factorization to effectively train the personalized transition matrices from a small number of training traces. We then formulate a re-identification attack in an "open" scenario, where many of the anonymized traces are from non-target users. Specifically, we regard this type of attack as a biometric verification (or identification) task, and propose a framework and an algorithm for performing this task using a population transition matrix, which is computed from the personalized transition matrices. Our experimental results using three real data sets show that a training method using tensor factorization significantly outperforms the maximum likelihood estimation method, and is further improved by incorporating group sparsity regularization.
Subgraph counting is fundamental for analyzing connection patterns or clustering tendencies in graph data. Recent studies have applied LDP (Local Differential Privacy) to subgraph counting to protect user privacy even against a data collector in social networks. However, existing local algorithms suffer from extremely large estimation errors or assume multi-round interaction between users and the data collector, which requires a lot of user effort and synchronization.
In this paper, we focus on one round of interaction and propose accurate subgraph counting algorithms by introducing the recently studied shuffle model. We first propose a basic technique called wedge shuffling to send wedge information, the main component of several subgraphs, with small noise. Then we apply wedge shuffling to counting triangles and 4-cycles -- basic subgraphs for analyzing clustering tendencies -- with several additional techniques. We also show upper bounds on the estimation error of each algorithm. We show through comprehensive experiments that our one-round shuffle algorithms significantly outperform one-round local algorithms in terms of accuracy and achieve small estimation errors with a reasonable privacy budget, e.g., smaller than 1 in edge DP.
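The combinatorial identity underlying wedge-based triangle counting can be sketched without any privacy machinery. This is only the exact, non-private baseline; the paper's wedge shuffling instead reports noisy wedge information through a shuffler:

```python
from itertools import combinations

def count_triangles_via_wedges(adj):
    """Count triangles in an undirected graph by enumerating wedges
    (paths u - v - w centred at v) and checking whether each wedge is
    closed (edge u - w present).

    Every triangle contains exactly three closed wedges, one centred
    at each of its vertices, hence the final division by 3.  This is
    the exact identity that wedge-based estimators build on.

    adj: dict mapping each vertex to the set of its neighbours.
    """
    closed = 0
    for v, nbrs in adj.items():
        for u, w in combinations(sorted(nbrs), 2):
            if w in adj[u]:
                closed += 1
    return closed // 3
```

In the private setting, the "is this wedge closed?" bit is what each user perturbs before shuffling, and the estimator then corrects for the added noise.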
Cancelable biometric schemes have been widely studied to protect templates in biometric authentication over networks. These schemes transform biometric features and perform pattern matching without restoring the original features. Although they strongly prevent the leakage of the original features, the response time can be very long in a large-scale biometric identification system. Most of the existing indexing schemes cannot be used to speed up biometric identification over networks, since a biometric index leaks some information about the original feature. Secure and efficient indexing is a major challenge in large-scale biometric identification over networks. In this paper, we propose a novel indexing scheme that is promising with regard to both security and efficiency. The proposed indexing scheme transforms a permutation-based index, which is the state-of-the-art index in the field of similarity search, and performs a query search without recovering the original index. We also propose a method to artificially generate the biometric features necessary to generate an index (which are called "pivots") based on GANs (Generative Adversarial Networks). We prove that the transformed index leaks no information about the original index and the original biometric feature (i.e., perfect secrecy), and comprehensively show that the proposed indexing scheme satisfies irreversibility, unlinkability, and revocability. We then demonstrate that the proposed indexing scheme significantly outperforms the existing indexing schemes on three real datasets (face, fingerprint, and finger-vein datasets), and is very promising with respect to accuracy and response time.
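A plain (untransformed) permutation-based index of the kind the scheme builds on can be sketched as follows. The function names and the footrule comparison are illustrative, and the privacy-preserving transformation that is the paper's contribution is not shown:

```python
def permutation_index(feature, pivots, dist):
    """Build a permutation-based index: rank the pivots by their
    distance to the feature and record that ranking.  Similar features
    see the pivot set in a similar order, so comparing permutations
    approximates similarity search without comparing raw features.
    The paper's scheme additionally transforms this permutation so
    that it leaks nothing about the feature; that step is omitted.
    """
    return sorted(range(len(pivots)), key=lambda i: dist(feature, pivots[i]))

def spearman_footrule(p, q):
    """L1 distance between the positions of each pivot in two
    permutations: a standard dissimilarity measure for such indexes."""
    pos_p = {v: i for i, v in enumerate(p)}
    pos_q = {v: i for i, v in enumerate(q)}
    return sum(abs(pos_p[v] - pos_q[v]) for v in pos_p)
```

With 1-D toy features and absolute difference as the distance, a feature near pivot 0 and a feature near the last pivot produce reversed permutations, giving a large footrule distance, while identical features give distance 0.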
The number of IT services that use machine learning (ML) algorithms is growing continuously and rapidly, and many of them are used in practice to make predictions from personal data. Not surprisingly, due to this sudden boom in ML, the way personal data are handled in ML systems is starting to raise serious privacy concerns that were previously unconsidered. Recently, Fredrikson et al. (USENIX Security 2014; CCS 2015) proposed a novel attack against ML systems called the model inversion attack, which aims to infer sensitive attribute values of a target user. In their work, for the model inversion attack to be successful, the adversary is required to obtain two types of information concerning the target user prior to the attack: the output value (i.e., prediction) of the ML system and all of the non-sensitive values used to learn the output. Therefore, although the attack does raise new privacy concerns, since the adversary is required to know all of the non-sensitive values in advance, it is not completely clear how much risk is incurred by the attack. In particular, even though the users may regard these values as non-sensitive, it may be difficult for the adversary to obtain all of the non-sensitive attribute values prior to the attack, hence making the attack invalid. The goal of this paper is to quantify the risk of model inversion attacks in the case when non-sensitive attributes of a target user are not available to the adversary. To this end, we first propose a general model inversion (GMI) framework, which models the amount of auxiliary information available to the adversary. Our framework captures the model inversion attack of Fredrikson et al. as a special case, while also capturing model inversion attacks that infer sensitive attributes without knowledge of non-sensitive attributes. For the latter attack, we provide a general methodology for inferring sensitive attributes of a target user without knowledge of non-sensitive attributes.
At a high level, we use the data poisoning paradigm in a conceptually novel way and inject malicious data into the ML system in order to modify the internal ML model being used into a target ML model; a special type of ML model which allows one to perform model inversion attacks without the knowledge of non-sensitive attributes. Finally, following our general methodology, we cast ML systems that internally use linear regression models into our GMI framework and propose a concrete algorithm for model inversion attacks that does not require knowledge of the non-sensitive attributes. We show the effectiveness of our model inversion attack through experimental evaluation using two real data sets.
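For a linear regression model, the basic inversion step that such attacks rely on can be sketched as follows; the function name and interface are illustrative. With all non-sensitive attributes known, the sensitive attribute falls out by simple algebra, which is exactly the knowledge requirement the paper's poisoning technique removes:

```python
def invert_sensitive_attribute(prediction, weights, bias, known, sensitive_idx):
    """Recover the single unknown (sensitive) attribute from a linear
    model's output.  From y = b + sum_i w_i * x_i it follows that
        x_s = (y - b - sum_{i != s} w_i * x_i) / w_s.

    prediction    : observed model output y for the target user
    weights, bias : the linear model's parameters
    known         : dict mapping attribute index -> value, for every
                    non-sensitive attribute (the Fredrikson-style
                    knowledge assumption)
    sensitive_idx : index s of the attribute to recover
    """
    residual = prediction - bias
    for i, w in enumerate(weights):
        if i != sensitive_idx:
            residual -= w * known[i]
    return residual / weights[sensitive_idx]
```

The paper's poisoned "target ML model" is, conceptually, one for which this algebra still goes through even when `known` is empty; that construction is beyond this sketch.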
The likelihood-ratio based score level fusion (LR fusion) scheme is known as one of the most promising multibiometric fusion schemes. This scheme verifies a user by computing a log-likelihood ratio (LLR) for each modality and comparing the total LLR to a threshold. In practice, it can happen that genuine LLRs tend to be less than 0 for some modalities (e.g., the user is a “goat”, who is inherently difficult to recognize, for some modalities; or the user suffers from temporary physical conditions such as injuries and illness). The LR fusion scheme can handle such cases by allowing the user to select a subset of modalities at the authentication phase and setting the LLRs corresponding to missing query samples to 0. A recent study, however, proposed a modality selection attack, in which an impostor inputs only query samples whose LLRs are greater than 0 (i.e., takes an optimal strategy), and proved that this attack degrades the overall accuracy even if the genuine user also takes this optimal strategy. In this paper, we investigate the impact of the modality selection attack in more detail. Specifically, we investigate whether the overall accuracy is improved by eliminating “goat” templates, whose LLRs tend to be less than 0 for genuine users, from the database (i.e., restricting modality selection). As an overall performance measure, we use the KL (Kullback-Leibler) divergence between the genuine score distribution and the impostor score distribution. We first prove that the modality restriction hardly increases the KL divergence when a user can select a subset of modalities (i.e., selective LR fusion). We then prove that the modality restriction increases the KL divergence when a user needs to input all biometric samples (i.e., non-selective LR fusion). We conduct experiments using three real datasets (NIST BSSR1 Set1, Biosecure DS2, and CASIA-Iris-Thousand), and discuss directions for multibiometric fusion systems.
The likelihood-ratio based score level fusion (LR-based fusion) scheme has attracted much attention, since it maximizes accuracy if the log-likelihood ratio (LLR) is accurately estimated. In reality, it can happen that a user cannot input some query samples due to temporary physical conditions such as injuries and illness. It can also happen that some modalities tend to cause false rejection (i.e., the user is a “goat” for these modalities). The LR-based fusion scheme can handle these situations by setting the LLRs corresponding to missing query samples to 0. In this paper, we refer to such a mode as a “modality selection mode” and address the issue of accuracy in this mode. Specifically, we provide the following contributions: (1) We first propose a “modality selection attack”, in which an impostor inputs only query samples whose LLRs are greater than 0 (i.e., takes an optimal strategy) to impersonate others. We also show that the impostor can perform this attack against the SPRT (Sequential Probability Ratio Test)-based fusion scheme, which is an extension of the LR-based fusion scheme to a sequential fusion scenario. (2) We then consider the case when both genuine users and impostors take this optimal strategy, and show that the overall accuracy in this case is worse than when they input all query samples. More specifically, we prove that the KL (Kullback-Leibler) divergence between the genuine distribution of integrated scores and the impostor distribution, which can be compared with password entropy, is smaller in the former case. We also quantify the KL divergence loss for each modality. (3) We finally evaluate to what extent the overall accuracy degrades using the NIST BSSR1 Set 2 and Set 3 datasets, and discuss directions for multibiometric applications based on the experimental results.
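The LR-based fusion rule and the modality selection attack described in the two abstracts above can be sketched as follows; the placeholder densities and function names are illustrative, not taken from the papers:

```python
import math

def total_llr(genuine_pdfs, impostor_pdfs, scores):
    """Score-level likelihood-ratio fusion: sum the per-modality
    log-likelihood ratios log(p_gen(s) / p_imp(s)) and compare the
    total to a threshold.  A missing modality (score is None)
    contributes 0, which is how the LR-based scheme handles skipped
    query samples.  The densities would in practice be estimated
    from training scores; here they are caller-supplied placeholders."""
    llr = 0.0
    for g, i, s in zip(genuine_pdfs, impostor_pdfs, scores):
        if s is not None:
            llr += math.log(g(s) / i(s))
    return llr

def modality_selection_attack(genuine_pdfs, impostor_pdfs, scores):
    """The attack strategy: submit only the samples whose individual
    LLR is positive, dropping every modality that would lower the
    total.  The resulting fused score can never be smaller than the
    honest all-modality total."""
    kept = [s if s is not None and math.log(g(s) / i(s)) > 0 else None
            for g, i, s in zip(genuine_pdfs, impostor_pdfs, scores)]
    return total_llr(genuine_pdfs, impostor_pdfs, kept)
```

With two toy Gaussian-shaped densities, a sample scoring well on one modality and poorly on the other yields a higher fused LLR under the attack than under honest submission, which is the accuracy degradation both abstracts analyze.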