•Reviews more than 45 recent solution papers and more than 40 different privacy-preserving deep learning techniques.
•Proposes a multi-level taxonomy that classifies the privacy-preserving deep learning techniques.
•Summarizes evaluation results of the reviewed solutions with respect to performance metrics.
•Discusses and outlines a number of lessons learned for each privacy-preserving task.
•Compares the solutions, highlights open research challenges, and provides recommendations for future research.
Deep learning is one of the advanced approaches of machine learning and has attracted growing attention in recent years. It is used nowadays in various domains and applications, such as pattern recognition, medical prediction, and speech recognition. Unlike traditional learning algorithms, deep learning can overcome the dependency on hand-designed features. The deep learning experience is particularly improved by leveraging powerful infrastructures such as clouds and by adopting collaborative learning for model training. However, this comes at the expense of privacy, especially when sensitive data are processed during the training and prediction phases, as well as when the trained model is shared. In this paper, we review the existing privacy-preserving deep learning techniques and propose a novel multi-level taxonomy, which categorizes the current state-of-the-art techniques on the basis of privacy-preserving tasks at the top level and key technological concepts at the base level. This survey further summarizes evaluation results of the reviewed solutions with respect to defined performance metrics. In addition, it derives a set of lessons learned from each privacy-preserving task. Finally, it highlights open research challenges and provides recommendations as future research directions.
The construction and effects of national boundaries have become central topics in public and academic debates on digital sovereignty. Both state and non-state actors increasingly consider jurisdictions and traditional governing structures as means to capture and regulate digital data flows. This article delves into the intricate phenomenon of ‘data localization’, conceptualizing it as a socio-technical assemblage reflecting the evolving expectations surrounding Internet architecture and national boundaries. Interviewing the users of Threema – a Swiss secure messaging app – this study unravels data localization practices as a hybrid black box, intertwining technical changes, political discourses, socio-technical imaginaries, and shifting social norms. Drawing on the field of Science and Technology Studies, we mobilize the analytical tools of controversy and discourse to highlight data localization as a locus of political contestation in Switzerland, where imaginaries of national boundaries are often mobilized to symbolize security and reliability. The article provides three key contributions to the discourse on digital sovereignty, fragmentation, and governance. Firstly, it argues for the usefulness of Science and Technology Studies in understanding Internet governance, emphasizing the need for analyses grounded in specific socio-technical contexts. Secondly, it advocates for a social perspective on digital sovereignty, emphasizing user agency, social movements, and collective action as crucial factors shaping the governance of data flows. Lastly, the article sheds light on users resorting to state jurisdictions as a means to reinforce control over data flows, exploring the discursive mobilization of national boundaries in the digital public sphere.
Although much research examines the factors that affect technology adoption and use, less is known about how older adults as a group differ in their ability to use the Internet. The theory of digital inequality suggests that even once people have gone online, differences among them will persist in important ways, such as their online skills. We analyze survey data about older American adults’ Internet skills to examine whether skills differ in this group and, if they do, what explains differential online abilities. We find that there is considerable variation in Internet know-how and that this relates to both socioeconomic status and autonomy of use. The results suggest that attempts to achieve a knowledgeable older adult population regarding Internet use must take into account these users’ socioeconomic background and available access points.
To investigate the dynamics of online persuasion, this research uses the Elaboration Likelihood Model (ELM) to determine the effects of argument quality as a central route to attitude change versus design and social elements as peripheral routes to attitude change. This research additionally examines change in issue involvement as a mediator between the central and peripheral routes leading to attitude change. Findings from a study of 403 participants add to our understanding of the ELM concerning the role of website design and how an individual’s level of issue involvement is a prerequisite to changing user attitudes.
MOOCs (Massive Open Online Courses) usually have high dropout rates. Many articles have proposed predictive models to detect at-risk learners early and alleviate this issue. Nevertheless, existing models do not consider complex high-level variables, such as self-regulated learning (SRL) strategies, which can have an important effect on learners' success. In addition, predictions are often carried out in instructor-paced MOOCs, where contents are released gradually, but not in self-paced MOOCs, where all materials are available from the beginning and users can enroll at any time. For self-paced MOOCs, existing predictive models are limited in the way they deal with the flexibility of the course start date, which is learner dependent. Therefore, they need to be adapted so as to predict with little information shortly after each learner starts engaging with the MOOC. To address these issues, this paper contributes a study of how SRL strategies can be included in predictive models for self-paced MOOCs. In particular, self-reported and event-based SRL strategies are evaluated and compared to measure their effect on dropout prediction. The paper also contributes a new methodology for carrying out a temporal analysis of self-paced MOOCs to discover how early prediction models can detect at-risk learners. Results show that event-based SRL strategies have very high predictive power, although variables related to learners' interactions with exercises remain the best predictors. That is, event-based SRL strategies can be useful for prediction when, for example, variables related to learners' interactions with exercises are not available. Furthermore, results show that this methodology achieves powerful early predictions from about 25 to 33% of the theoretical course duration.
The proposed methodology presents a new approach to predicting dropouts in self-paced MOOCs, considering complex variables that go beyond the classic trace data directly captured by MOOC platforms.
•Event-based self-regulated learning (SRL) strategies are good predictors of dropout.
•Self-reported SRL strategies are not useful for predicting dropout.
•A new approach is proposed for prediction in self-paced MOOCs.
•Powerful predictions can be achieved from 25 to 33% of the course duration.
With the ever-increasing development of technology and its integration into users’ private and professional lives, the decision to accept or reject it remains an open question. The considerable amount of work on the technology acceptance model (TAM) since its first appearance more than a quarter of a century ago clearly indicates the popularity of the model in the field of technology acceptance. Originating in the psychological theory of reasoned action and the theory of planned behavior, TAM has evolved into a key model for understanding predictors of human behavior toward the potential acceptance or rejection of technology. The main aim of this paper is to provide an up-to-date, well-researched resource of past and current TAM-related literature and to identify possible directions for future TAM research. The paper presents a comprehensive concept-centric literature review of the TAM from 1986 onwards. Following a designed methodology, 85 scientific publications were selected and classified according to their aim and content into three categories: (i) TAM literature reviews, (ii) development and extension of TAM, and (iii) modification and application of TAM. Despite continuous progress in revealing new factors with significant influence on TAM’s core variables, there are still many unexplored areas of potential model application that could contribute to its predictive validity. Consequently, four possible future directions for TAM research, based on the conducted literature review and analysis, are identified and presented.
•How 2.3 million Facebook users consumed different information.
•Qualitatively different information is consumed in a similar way.
•Users more prone to interact with false claims are usually exposed to conspiracy rumors.
In this work we study, on a sample of 2.3 million individuals, how Facebook users consumed different information at the edge of political discussion and news during the last Italian electoral competition. Pages are categorized, according to their topics and the communities of interest they pertain to, into (a) alternative information sources (diffusing topics that are neglected by science and mainstream media); (b) online political activism; and (c) mainstream media. We show that attention patterns are similar despite the different qualitative nature of the information, meaning that unsubstantiated claims (mainly conspiracy theories) reverberate for as long as other information. Finally, we classify users according to their interaction patterns among the different topics and measure how they responded to the injection of 2788 items of false information. Our analysis reveals that users who prominently interact with conspiracist information sources are more prone to interact with intentional false claims.
Today, deep convolutional neural networks (CNNs) have demonstrated state-of-the-art performance for supervised medical image segmentation across various imaging modalities and tasks. Despite early success, segmentation networks may still generate anatomically aberrant segmentations, with holes or inaccuracies near object boundaries. To mitigate this effect, recent research has focused on incorporating spatial information or prior knowledge to enforce anatomically plausible segmentation. While the integration of prior knowledge in image segmentation is not a new topic in classical optimization approaches, it is today an increasing trend in CNN-based image segmentation, as shown by the growing literature on the topic. In this survey, we focus on high-level priors embedded at the loss function level. We categorize the articles according to the nature of the prior: object shape, size, topology, and inter-region constraints. We highlight the strengths and limitations of current approaches, discuss the challenges related to the design and integration of prior-based losses and the optimization strategies, and draw future research directions.
•Review of methods that incorporate prior knowledge into the deep learning loss function for medical image segmentation.
•Understanding of the mechanisms behind the design and implementation of prior-based losses.
•Categorization of prior-based losses according to the nature of the prior constraints.
•Overview of: the types of priors existing in the literature and how they are modeled; the major challenges linked to the design of such prior-based losses; and their common training and optimization strategies.
The dossier examines the reflexivity of contemporary digital creations which, far from glorifying the forms and devices on which they are built, are often intended, on the contrary, to denounce the limits and risks of the emerging technologies on which they are based. The ten articles collected here highlight the critical function of digital works (series, video games, social networks) in relation to technological developments in the contemporary world.
The Internet of Things (IoT) is an ecosystem in which physical objects, software, and hardware interact with each other. The aging of the population, the shortage of healthcare resources, and rising medical costs make it necessary to tailor IoT-based technologies to address these challenges in healthcare. This systematic literature review was conducted to determine the main application areas of IoT in healthcare, the components of IoT architecture in healthcare, the most important technologies in IoT, the characteristics of cloud-based architecture, security and interoperability issues in IoT architecture, and the effects and challenges of IoT in healthcare. Sixty relevant papers, published between 2000 and 2016, were reviewed and analyzed. This analysis revealed that home healthcare services were one of the main application areas of IoT in healthcare. Cloud-based architecture, which provides great flexibility and scalability, was deployed in most of the reviewed studies. Communication technologies including wireless fidelity (Wi-Fi), Bluetooth, radio-frequency identification (RFID), ZigBee, and Low-Power Wireless Personal Area Networks (LoWPAN) were frequently used in different IoT models. Studies addressing security and interoperability issues in IoT architecture in health are still few in number. The most important effects of IoT in healthcare included the ability to exchange information and reductions in hospital stays and healthcare costs. The main challenges of IoT in healthcare were security and privacy issues.