In the arms race between attackers, who try to build ever more realistic face replay attacks, and defenders, who deploy spoof detection modules of ever-increasing capability, CNN-based methods have shown outstanding detection performance, raising the bar for the construction of realistic replay attacks against face-based authentication systems. Rather than rebroadcasting even more realistic faces, we show that attackers can successfully fool a face authentication system equipped with a deep-learning spoof detection module by exploiting the vulnerability of CNNs to adversarial perturbations. We first show that mounting such an attack is not a trivial task, due to the unique features of spoofing detection modules. We then propose a method to craft adversarial images that can be successfully exploited to build an effective replay attack. Experiments conducted on the REPLAY-MOBILE database demonstrate that our attacked images perform well against a face recognition system equipped with CNN-based anti-spoofing: they are able to pass the face detection, spoof detection and face recognition modules of the authentication chain.
•The unique features of CNN-based spoofing detection impede adversarial examples.
•Adversarial attacks must act pre-emptively, in the physical domain.
•Our attack can pass face detection, spoof detection and face recognition simultaneously.
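The vulnerability the abstract above exploits can be illustrated with a one-step gradient-sign (FGSM-style) perturbation. The sketch below is illustrative only and is not the paper's method: a hypothetical linear "spoof detector" with made-up weights stands in for a CNN, and the closed-form input gradient stands in for back-propagation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One-step gradient-sign perturbation that increases the
    cross-entropy loss of a logistic model w.r.t. label y."""
    p = sigmoid(w @ x + b)        # model's current spoof probability
    grad_x = (p - y) * w          # exact input gradient of the loss
    return x + eps * np.sign(grad_x)

# hypothetical 4-feature linear "spoof detector" (1 = spoof, 0 = live)
w = np.array([1.0, -2.0, 0.5, 3.0])
b = -0.1
x = np.array([0.2, 0.4, 0.1, 0.8])             # input currently flagged as spoof
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.3)  # push the score away from "spoof"
```

After the perturbation, the toy detector's spoof score drops from about 0.85 to about 0.45, so the same input now passes as live. Against a real CNN the gradient would be obtained by back-propagation rather than the closed form used here, and the perturbed image would additionally have to survive the recapture process, which is the non-trivial part the paper addresses.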
How much instructional assistance to provide to students as they learn, and what kind of assistance to provide, is a much-debated problem in research on learning and instruction. This study presents two multi-session classroom experiments in the domain of chemistry, comparing the effectiveness and efficiency of three high-assistance (worked examples, tutored problems, and erroneous examples) and one low-assistance (untutored problem solving) instructional approach, with error feedback consisting of either elaborate worked examples (Experiment 1) or basic correctness feedback (Experiment 2). Neither experiment showed differences in learning outcomes among conditions, but both showed clear efficiency benefits of worked example study: equal levels of test performance were achieved with significantly less investment of time and effort during learning. Interestingly for both theory and practice, the time efficiency benefit was substantial: worked example study required 46–68% less time in Experiment 1 and 48–69% in Experiment 2 than the other instructional approaches.
•We compared high- and low-assistance instructional materials, i.e., worked examples, erroneous examples, tutored problems, and untutored problems.
•In two multi-session classroom experiments, worked examples proved to be the most efficient.
•Study-time reductions with worked examples were between 46% and 69% compared to the other instructional approaches.
•We provide a detailed review of the evolution of adversarial machine learning over the last ten years.
•We start from pioneering work up to more recent work aimed at understanding the security properties of deep learning algorithms.
•We review work in the context of different applications.
•We highlight common misconceptions related to the evaluation of the security of machine learning and pattern recognition algorithms.
•We discuss the main limitations of current work, along with the corresponding future research paths towards designing more secure learning algorithms.
Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, has been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms.
University students are often asked to learn abstract concepts, which are hard to learn. Giving specific examples can help, but those examples might limit understanding to the similarities between the abstract domain and the particular examples. The primary purpose of this study was to test whether exposure to multiple examples would lead to better learning than exposure to a single example. Secondarily, we were interested in whether any particular example was especially effective. Introductory psychology students were invited to learn about the abstract concept of semiotics through either 1) three of five distinct examples or 2) a single example presented three times. We assessed learning through definitions, transfer to a novel example, and self-report. The results showed no support for the hypothesis that exposure to multiple examples led to better learning. There was, however, one particular example that was more memorable and resulted in better learning. These results have implications for how best to teach abstract concepts.
We show that adversarial training of supervised learning models is in fact a robust optimization procedure. To do this, we establish a general framework for increasing local stability of supervised learning models using robust optimization. The framework is general and broadly applicable to differentiable non-parametric models, e.g., Artificial Neural Networks (ANNs). Using an alternating minimization-maximization procedure, the loss of the model is minimized with respect to perturbed examples that are generated at each parameter update, rather than with respect to the original training data. Our proposed framework generalizes adversarial training, as well as previous approaches for increasing local stability of ANNs. Experimental results reveal that our approach increases the robustness of the network to existing adversarial examples, while making it harder to generate new ones. Furthermore, our algorithm improves the accuracy of the networks also on the original test data.
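The alternating minimization-maximization procedure described above can be sketched in a few lines. This is a toy illustration under stated assumptions, not the authors' implementation: a logistic-regression "network" stands in for an ANN, and a single gradient-sign step approximates the inner maximization.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=200, seed=0):
    """Alternating min-max training of a logistic model:
    inner max  -> craft a worst-case perturbation of each example,
    outer min  -> update the parameters on the perturbed example."""
    rng = np.random.default_rng(seed)
    w = 0.01 * rng.normal(size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            # maximization step: one gradient-sign perturbation of x_i
            p = sigmoid(w @ x_i + b)
            x_adv = x_i + eps * np.sign((p - y_i) * w)
            # minimization step: gradient descent on the perturbed point
            g = sigmoid(w @ x_adv + b) - y_i
            w -= lr * g * x_adv
            b -= lr * g
    return w, b

# tiny linearly separable toy problem (margin larger than eps)
X = np.array([[-1.0], [-0.8], [0.8], [1.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b = adversarial_train(X, y)
```

Because each parameter update sees the worst-case perturbed example rather than the original one, the resulting decision boundary keeps a margin of at least roughly `eps` around the training points, which is exactly the robust-optimization view the abstract describes.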
Remote sensing image analysis technology based on neural networks has significantly facilitated human life. However, adversarial attacks can drastically impair the performance of these models, posing substantial economic and security risks. Current adversarial example (AE) detectors primarily focus on studying attacked natural images, while AEs in the remote sensing domain have not received adequate attention. To address this challenge, we propose a novel dual-branch sparse self-learning framework, leveraging instance binding augmentation. The contrastive branch concurrently enhances intra-instance and inter-example feature discrimination, while the masked branch reconstructs perturbation distributions. Furthermore, our method utilizes sparse encoding within depthwise separable convolutions to efficiently transfer parameters, thereby ensuring compatibility with deployment on mobile devices. Extensive experiments demonstrate that our method achieves state-of-the-art performance in detecting both white-box and black-box attacks on remote sensing images. Specifically, our method achieves an average detection accuracy of 95.69%/95.18%, a recall of 91.8%/93.6%, and an F1 score of 93.48%/94.22% on two attacked models across various attack scenarios, outperforming existing methods.
We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only “virtually” adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
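The virtual adversarial direction described above is estimated by power iteration on the local curvature of the KL divergence, without using any labels. The sketch below is a simplified, hypothetical illustration, not the authors' code: a fixed logistic model plays the classifier, and finite differences replace the back-propagation the paper uses.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def kl_bernoulli(p, q, tiny=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p = np.clip(p, tiny, 1.0 - tiny)
    q = np.clip(q, tiny, 1.0 - tiny)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def vat_direction(x, predict, xi=1e-2, n_power=1, h=1e-5, seed=0):
    """One power-iteration estimate of the virtual adversarial direction:
    the unit vector along which the model's output distribution is least
    smooth. Only the model's own predictions are used -- no labels."""
    rng = np.random.default_rng(seed)
    d = rng.normal(size=x.shape)
    d /= np.linalg.norm(d)
    p = predict(x)                       # the "virtual" label distribution
    for _ in range(n_power):
        g = np.zeros_like(x)
        for i in range(x.size):          # finite-difference gradient in r
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (kl_bernoulli(p, predict(x + xi * d + e))
                    - kl_bernoulli(p, predict(x + xi * d - e))) / (2 * h)
        d = g / (np.linalg.norm(g) + 1e-12)
    return d

# hypothetical fixed classifier: logistic regression, predict(x) = P(y=1|x)
w, b = np.array([2.0, -1.0]), 0.1
predict = lambda x: sigmoid(w @ x + b)
x = np.array([0.3, 0.5])
d = vat_direction(x, predict)
vat_loss = kl_bernoulli(predict(x), predict(x + 0.5 * d))
```

For this linear toy model the procedure recovers the direction of ±w, the only direction in which the output can change; for a deep network the same power iteration (with back-propagated gradients, which is why VAT needs at most two extra forward/backward pairs) finds the locally most damaging perturbation, and `vat_loss` is added to the training objective as a regularizer.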
Decades of research have shown that example-based learning is an effective instructional strategy for learning new skills. The field of learning from examples is seeing a shift in focus towards more innovative and use-inspired research, in part because the use of examples for informal and formal learning purposes has mushroomed. This special issue comprises a set of eight papers in which students learned a procedural skill from worked examples or modeling examples. Each study characterizes a recent development towards more innovative example-based learning research. These developments are: (1) the integration of social-cognitive and cognitive example research, (2) the integration of example-based learning and analogical reasoning research, (3) the extension of traditional Cognitive Load Theory effects, (4) a greater focus on learning from (productive) errors, and (5) more research on individual differences.
Background
In example‐based learning, examples are often combined with generative activities, such as comparative self‐explanations of example cases. Comparisons induce heavy demands on working memory, especially in complex domains. Hence, only stronger learners may benefit from comparative self‐explanations. While static text‐based examples can be compared easily, this is challenging for transient video‐based modelling examples used in complex domains, because simultaneous processing of two videos is not feasible.
Objectives
To allow for such comparisons, we combined video‐based modelling examples with static representations (i.e., summarizing tables) of the observed optimal and a suboptimal solution of the problem‐solving process. A comparative self‐explanation prompt asked learners to compare the different solution approaches. Our study investigated the impact of video‐based modelling examples versus independent problem‐solving on cognitive load and problem‐solving skill development. Moreover, we investigated the effects of comparative versus sequential self‐explanation prompts, depending on learners' prior knowledge.
Methods
In an experiment, 118 automotive apprentices learned a car malfunction diagnosis strategy. Apprentices were divided into three groups: (1) modelling examples with comparative self‐explanation prompts, (2) modelling examples with sequential prompts, and (3) no examples or prompts. Diagnostic knowledge and skills were assessed before and after the intervention. Cognitive load was measured retrospectively.
Results and conclusions
Although no effects on cognitive load were observed, modelling examples enhanced diagnostic knowledge and scaffolded diagnostic skills, though not independent (unscaffolded) diagnostic skills; we assume that more practice opportunities are needed to foster the latter. Additionally, comparative prompts seem promising for learners with higher prior knowledge.
Takeaways
Video‐based modelling examples were more beneficial for learning than independent practice of the diagnostic strategy. Static representations allow for comparisons of video examples, and comparative prompts are promising for learners with higher prior knowledge (cf. the expertise‐reversal effect). Further research, especially on the effects on cognitive load, is needed.
Lay Description
What is already known about the topic?
Text and video examples that model how to solve a problem are widely used in education.
Text examples often include self‐explanation prompts that ask learners to compare several examples.
For video examples, such comparison prompts have seldom been investigated, because comparisons are difficult to implement for transient videos.
What does this paper add?
This paper shows that video examples combined with static summaries of the processes shown in the video are effective and allow for comparisons of video examples.
Such comparisons seem to be more promising for learners with higher prior knowledge.
Implications for practice
Practitioners could combine video examples with static summaries of the processes shown in the video examples to allow for comparisons.
•Generic example-use is productive for proving.
•Students have a relatively strong tendency to use examples generically.
•Example-use may be more productive when the source of the example is external.
•The representation used in an example makes a difference for proving.
Our work stems from the view that example-based reasoning has the potential of enhancing students’ mathematical thinking, and in particular can be helpful in engaging in proving and learning to prove. We aimed at better understanding the nature of example-use across grade levels, and in particular, how judicious example-use may support students’ ability to reason and prove. The paper builds on individual task-based interviews that were conducted with 12 middle school students, 16 high school students, and 10 undergraduate students, whose majors were mathematics or mathematics related. The tasks called for conjecturing and proving. In our analysis we distinguish between empirical example-use and generic example-use, and examine whether the example-uses that we identified were productive for proving, in terms of developing a proof, a deductive argument, or a sound justification that may lead to a proof. We illustrate these distinctions through ten cases drawn from the data. Our findings indicate a relatively strong tendency of students to use examples generically. They also suggest a strong, though not surprising, connection between treating examples generically and productively. Implications for practice and further research are discussed.