Today's power systems are undergoing a paradigm shift driven by the energy transition, sparked by the electrification of demand, the digitalisation of systems, and an increasing share of decarbonised power generation. Most of these changes have a direct impact on control centers, forcing them to handle weather-dependent energy resources, new interconnections with neighbouring transmission networks, more markets, active distribution networks, micro-grids, and greater amounts of available data. Unfortunately, over the past decade these pressures have translated into small, incremental changes, mostly centered on hardware, software, and human factors. We assert that more transformative changes are needed, especially regarding human-centered design approaches, to enable control room operators to manage the future power system. This paper discusses the evolution of operators towards continuous operation planners who monitor complex time horizons thanks to adequate real-time automation. Reviewing upcoming challenges as well as emerging technologies for power systems, we present our vision of a new evolutionary architecture for control centers, at both the backend and frontend levels. We propose a unified hypervision scheme based on structured decision-making concepts, providing operators with proactive, collaborative, and effective decision support.
Facial attractiveness is an important biological and social signal in social interaction. Recent research has demonstrated that an attractive face captures greater spatial attention than an unattractive face does. Little is known, however, about the temporal characteristics of visual attention to facial attractiveness. In this study, we investigated the temporal modulation of visual attention induced by facial attractiveness using a rapid serial visual presentation paradigm. Fourteen male faces and two female faces were presented in succession for 160 ms each, and participants were asked to identify the two female faces embedded among the series of male distractor faces. Identification of the second female target (T2) was impaired when the first target (T1) was attractive rather than neutral or unattractive, at a 320 ms stimulus onset asynchrony (SOA); identification was improved when T1 was attractive rather than unattractive at a 640 ms SOA. These findings suggest that the spontaneous appraisal of facial attractiveness modulates temporal attention.
Despite the economic advantages of cloud computing for companies and its many envisioned applications, obstacles to its large-scale adoption remain. The security of data stored and processed in the cloud tops the list of concerns of IT decision-makers. The main objective of the research conducted in this doctoral thesis is therefore to lay solid foundations for a safe and secure use of the cloud. First, outsourcing business processes to the cloud allows companies to reduce the investment costs and control the operating costs of their information systems. It also promotes the reuse of parts (or fragments) of their business processes as cloud services, possibly by direct competitors, in order to ease the development of new service-oriented (SOA) applications and to foster cloud-scale collaboration. However, revealing the provenance of a reused fragment is considered a privacy breach and may be damaging to the company that owns the fragment. Data anonymization techniques have proven themselves in the database field. Our main contribution in this part is a protocol based on the anonymization of business-process fragments that guarantees both the privacy of their owners and the availability of the fragments for reuse in the cloud. Biometric authentication systems allow individuals to be authenticated with sufficient assurance. However, the computing and storage resources these systems require, together with the lack of in-house expertise in organisations, considerably hinder their large-scale use.
The cloud makes it possible to outsource both the computation and the storage of biometric data at low cost, and to offer biometric authentication as a service. The elasticity of the cloud also absorbs peaks in authentication requests at busy hours. However, security and confidentiality issues arise for sensitive biometric data, and they must be addressed in order to convince institutions and organisations to use external biometric-authentication fragments in their business processes. Our main contribution in this part is a lightweight client-side protocol for outsourcing (to a remote server) the comparison of biometric data without revealing information that would help adversaries impersonate users. The protocol relies on lightweight cryptography based on hash algorithms and on the combinatorial group testing method, enabling an approximate comparison between two biometric samples. In the last part, we propose a secure protocol for sharing a cloud-hosted hypervisor (a tool that correlates and manages events coming from information systems) among several users. The proposed solution combines homomorphic encryption with the rewriting of correlation rules to guarantee the confidentiality of the events coming from the users' information systems. This thesis was carried out at Université Paris Descartes (diNo research group, LIPADE) with the support of SOMONE and the ANRT under a CIFRE agreement.
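The core idea of comparing hashed biometric data approximately can be illustrated with a minimal sketch. This is not the thesis's actual protocol: the group layout, salt handling, and threshold below are illustrative assumptions. The client splits a template into fixed-size groups and sends only salted digests; the server can then tolerate a few noisy groups without ever seeing raw biometric data.

```python
import hashlib
import secrets

def group_hashes(template: bytes, salt: bytes, group_size: int = 4) -> list:
    """Split a biometric template into fixed-size groups and hash each group.

    Only the salted digests leave the client, so the server never sees
    the raw template. Group size and salting scheme are toy choices.
    """
    groups = [template[i:i + group_size]
              for i in range(0, len(template), group_size)]
    return [hashlib.sha256(salt + g).hexdigest() for g in groups]

def approx_match(hashes_a, hashes_b, threshold: float = 0.8) -> bool:
    """Approximate comparison: accept if enough group digests agree.

    A noisy capture that differs in a few groups can still match,
    while the server only ever compares opaque digests.
    """
    agree = sum(h1 == h2 for h1, h2 in zip(hashes_a, hashes_b))
    return agree / len(hashes_a) >= threshold

salt = secrets.token_bytes(16)                   # client-side secret
enrolled = bytes(range(32))                      # toy 32-byte "template"
probe = bytes([0, 99] + list(range(2, 32)))      # same template, one noisy byte

h_enrolled = group_hashes(enrolled, salt)
h_probe = group_hashes(probe, salt)
print(approx_match(h_enrolled, h_probe))  # seven of eight groups agree
```

The trade-off is the usual one for fuzzy matching over hashes: larger groups leak less structure but tolerate less noise.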
Cloud computing has become one of the fastest growing segments of the IT industry. In such open distributed computing environments, security is of paramount concern. This thesis aims at developing protocols and techniques for the private and reliable outsourcing of design and compute-intensive tasks on cloud computing infrastructures. It enables clients with limited processing capabilities to use dynamic, cost-effective and powerful cloud computing resources, while guaranteeing that their confidential data and services, and the results of their computations, will not be compromised by untrusted cloud service providers. The thesis contributes to the general area of cloud computing security in three directions. First, design by selection is a new capability that permits the design of business processes by reusing fragments available in the cloud. For this purpose, we propose an anonymization-based protocol to secure the design of business processes by hiding the provenance of reused fragments. Second, we study two different cases of fragment sharing: biometric authentication and complex event processing. For these, we propose techniques where the client only does work that is linear in the size of its inputs, while the cloud bears all of the super-linear computational burden; moreover, the cloud's burden has the same time complexity as the best known solution to the problem being outsourced. This avoids achieving secure outsourcing at the price of a huge additional overhead on the cloud servers. This thesis has been carried out at Université Paris Descartes (LIPADE - diNo research group) in collaboration with SOMONE under a Cifre contract. The convergence of the research fields of those teams led to this manuscript.
This paper presents the challenges of implementing a bare-metal hypervisor without using hardware virtualization features. This choice is dictated by two reasons: (i) some processors do not include virtualization instructions, and (ii) in the context of formal verification, the proof relies on the good behavior of the hardware, so eliminating hardware features lets us build a more precise proof. Implementing virtualization features in software is complex work: the instruction set is large and, despite the documentation, some behaviors are not obvious, if not undefined. Moreover, doing this in software forces us to freeze the guest to perform the work, degrading performance. We implemented a software hypervisor with the particularity of running guest systems in privileged mode. Beforehand, the hypervisor dynamically analyzes the guest code and runs it after setting breakpoints on sensitive instructions. To perform this analysis, we examined the whole ARM and Thumb instruction sets to identify the sensitive instructions that have to be handled by the hypervisor. In order to preserve acceptable performance, we only track code running in privileged mode; guest kernels thus run at the same privilege level as the hypervisor. We evaluated our approach using micro-benchmarks and macro-benchmarks to measure the impact of the process on a piece of code and on a whole system. The results show that, when running a guest that performs pre-emptive scheduling and runs its tasks in user mode, our hypervisor incurs a reasonable overhead: from 0.3 % to 15 % on several synthetic benchmarks. We finally provide several ideas for further optimization and a direction for future work.
Practical applications of image processing generally require special devices and instruments, but we would sometimes like to carry out image processing easily for various purposes (for example, diagnosis of the growth state of crops or livestock by individual farmers) with suitable speed and low cost, and to see the processed results immediately. In the present study, we construct a PC (personal computer)-based image processing system with functions similar or superior to those of popular instruments specially designed for image processing on high-speed computers rather than PCs, by installing two boards (HyPER FRAME+ and HyPER ViSion+) in a PC such as NEC's PC9801 series. We developed a C software library consisting of various functions suitable for image processing, with a conversational interface based on mouse-driven menu selection. The system can be applied to the construction of image databases and the measurement of shape characteristics of animals, plants, etc. In part 1, we introduce and demonstrate the ideas, methods and usage of 1) digitization of video pictures, 2) presentation and preservation of image data, and 3) editing of digital images, such as "copy", "move", "delete" and "paint".