Abstract: The Brazilian Intelligence System (Sisbin) is fundamental to providing information-based advisory support to the head of the Brazilian executive branch. The objective of this work is to diagnose the challenges and opportunities in the process of sensitive but unclassified communication among the Sisbin agencies. To that end, a questionnaire was developed and administered to 137 people from 13 agencies of the system in order to assess the value of the information exchanged and to gather appraisals of the Access to Information Law (Lei de Acesso à Informação) and of information security. The data were analyzed on the basis of the Intelligence Doctrine and the specific legislation on information security. The results demonstrate the need to update the information-exchange tool, as well as to address certain critical factors. This work therefore establishes guidelines for a new tool, which should: use a private cloud, define and implement its processes, enable the joint drafting of documents, and provide centralized support.
Abstract
Objective
The Greater Plains Collaborative (GPC) and other PCORnet Clinical Data Research Networks capture healthcare utilization within their health systems. Here, we describe a reusable environment (GPC Reusable Observable Unified Study Environment, GROUSE) that integrates hospital and electronic health record (EHR) data with state-wide Medicare and Medicaid claims, and we assess how claims and clinical data complement each other to identify obesity and related comorbidities in a patient sample.
Materials and Methods
EHR, billing, and tumor registry data from 7 healthcare systems were integrated with Centers for Medicare & Medicaid Services insurance claims (Medicare 2011–2016; Medicaid 2011–2012) to create deidentified databases in Informatics for Integrating Biology & the Bedside (i2b2) and PCORnet Common Data Model formats. We describe the technical details of how this federally compliant, cloud-based data environment was built. As a use case, trends in obesity rates for different age groups are reported, along with the relative contribution of claims and EHR data to data completeness and to detecting common comorbidities.
Results
GROUSE contained 73 billion observations from 24 million unique patients (12.9 million Medicare; 13.9 million Medicaid; 6.6 million GPC patients), with 1 674 134 patients crosswalked and 983 450 patients with body mass index (BMI) linked to claims. Diagnosis codes from EHR and claims sources underreport obesity by a factor of 2.56 compared with BMI measures. However, common comorbidities such as diabetes and sleep apnea were more often identifiable from claims diagnosis codes (1.6 and 1.4 times, respectively).
Conclusion
GROUSE provides a unified EHR-claims environment to address health system and federal privacy concerns, which enables investigators to generalize analyses across health systems integrated with multistate insurance claims.
Today, most organizations employ cloud computing environments both for computation and for storing their critical files and data. Virtual servers are an example of widely used virtual resources provided by cloud computing architectures. As a result, virtual servers are an attractive target for cyber-attackers, who launch their attacks with malware such as the well-known remote access trojans (RATs) and more modern malware such as ransomware and cryptojacking. Existing security solutions implemented on virtual servers fail to detect such newly created malware (zero-day attacks); in fact, by the time the security solution is updated, the organization has likely already been attacked. In this study, we present a designated framework aimed at the trusted and secure detection of newly created and unknown instances of malware on virtual machines in an organization's private cloud. We took volatile memory dumps from a virtual machine (VM) in a secure and trusted manner and analyzed all of the data within the dumps using the MinHash method, which is well suited to the accurate detection of malware in VMs based on efficient volatile memory dump comparisons. The proposed framework is evaluated in a comprehensive set of experiments of increasing difficulty, in which we also measured the detection performance of different classifiers (both similarity- and machine learning-based), using collections of real-world, professional, notorious malware and legitimate applications. The evaluation results show that our framework can detect the anomalous state of a virtual server, as well as known, new, and unknown malware, with very high TPRs (100% for both ransomware and RATs) and very low FPRs (1.8% for ransomware and 0% for RATs). We also show how the methodology's performance can be improved in terms of the required time and storage space, saving more than 86% of these resources.
Finally, we demonstrate the generalization capabilities and practicality of our methodology by using transfer learning and learning from just one virtual server in order to detect unknown malware on a different virtual server.
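The abstract names MinHash as the core dump-comparison technique but does not include an implementation. As an illustration only, here is a minimal pure-Python sketch of MinHash-based similarity between two sets of memory-dump pages; all identifiers ("page_*", "injected_*", the signature size) are hypothetical, not from the paper:

```python
import hashlib

def minhash_signature(items, num_perm=64):
    """MinHash signature: for each of num_perm seeded hash functions,
    keep the minimum hash value observed over all items in the set."""
    sig = []
    for seed in range(num_perm):
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(item.encode(), digest_size=8,
                                salt=seed.to_bytes(8, "big")).digest(),
                "big")
            for item in items))
    return sig

def estimated_jaccard(sig_a, sig_b):
    """The fraction of matching signature slots estimates the
    Jaccard similarity of the underlying sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# Hypothetical scenario: a clean baseline dump vs. a dump with
# extra pages injected by malware.
baseline = {"page_%d" % i for i in range(1000)}
suspect = set(baseline)
suspect.update("injected_%d" % i for i in range(200))

sig_clean = minhash_signature(baseline)
sig_suspect = minhash_signature(suspect)
similarity = estimated_jaccard(sig_clean, sig_suspect)
# true Jaccard here is 1000/1200 ≈ 0.83; a low estimate flags an anomaly
```

The appeal of this approach for memory forensics is that signatures are tiny and fixed-size, so dump-to-baseline comparisons stay cheap regardless of dump size.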
Hypervisor for Virtualization in Private Cloud. Závacký, Pavol; Eliáš, Andrej; Stémy, Maximilián. Vedecké práce Materiálovotechnologickej fakulty Slovenskej technickej univerzity v Bratislave so sídlom v Trnave, 2015, Volume 23, Issue 1. Journal Article (peer reviewed, open access).
The article deals with testing and choosing the right virtualization platform, with management tools, for building a private cloud in which methods for power control of virtual machines can be applied. The main building block of every virtualization platform is the hypervisor, which carries out the virtualization, together with the management tools, which deliver services such as web management, storage management, and resource management from one place.
With the rapid growth in medical data, hospitals need to make enormous investments annually to expand computing resources. Cloud computing offers a platform for running medical services. However, sharing medical data with unknown neighbors in a public cloud environment may threaten the sensitive data of medical services. A private cloud provides a safe way to protect this sensitive data, but it differs substantially from a public cloud: it is not easy to obtain additional resources in a timely manner when an unpredictable workload exceeds the private cloud's total resources. In addition, optimal resource allocation becomes a key issue, as medical services possess distinctive features that require different combinations of resources. In this article, an efficient resource management solution for medical services in a hospital information system based on a private cloud is proposed. We use intelligent control theory to adaptively adjust the resource allocation according to the dynamic workload, which effectively utilizes the limited resources of the private cloud while ensuring the quality of service. The experimental results suggest that the proposed solution enables the efficient application of resources and reactions to unpredictable situations, which reduces the IT resources required by hospitals.
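The article's controller is not published; purely as an illustration of the adaptive-allocation idea described above, a single proportional feedback step could look like the following (gains, targets, and limits are hypothetical, not the paper's values):

```python
def adjust_allocation(allocated, utilization, target=0.7, gain=0.5,
                      min_alloc=1, max_alloc=32):
    """One step of a proportional controller: scale the current
    allocation in the direction of the utilization error, clamped
    to the private cloud's fixed resource limits."""
    error = utilization - target          # > 0: overloaded, < 0: idle
    new_alloc = allocated * (1 + gain * error)
    return max(min_alloc, min(max_alloc, round(new_alloc)))

# A saturated service (95% utilization) gains resources...
grown = adjust_allocation(allocated=8, utilization=0.95)
# ...while an idle one (20% utilization) releases them for other services.
shrunk = adjust_allocation(allocated=8, utilization=0.20)
```

The clamping step reflects the key private-cloud constraint the abstract emphasizes: unlike a public cloud, the total resource pool is fixed, so the controller can only redistribute within it.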
Power system simulations play a critical role in power grid planning studies. These studies become more computationally intensive as a result of the increasing level of system uncertainty and complexity. The deficiency of existing on-premise computing infrastructure in meeting these computational needs makes cloud computing an attractive alternative. A pilot project was initiated at ISO New England (ISO-NE) to explore the feasibility and implementation of cloud computing for power system simulations. In this paper, the concept of cloud computing, the system architecture design, and the cyber security scheme of the developed cloud-computing platform are discussed in detail. The case study shows that adopting cloud computing can successfully meet various computing needs in power system planning studies in a cost-effective way, without compromising the cyber security and data privacy that are equally important to organizations such as ISO-NE.
Most organizations today employ cloud-computing environments and virtualization technology; due to their prevalence and importance in providing services to the entire organization, virtual servers are constantly targeted by cyber-attacks, and specifically by malware. Existing solutions, consisting of the widely used antivirus (AV) software, fail to detect newly created and unknown malware; moreover, by the time the AV is updated, the organization has already been attacked. In this paper, we present a run-time analysis methodology for the trusted detection of unknown malware on virtual machines (VMs). We conducted a trusted analysis of volatile memory dumps taken from a VM and focused on analyzing their system calls using a sequential mining method. We leveraged the most informative system calls with machine-learning algorithms for the efficient detection of malware on VMs widely used within organizations (e.g., IIS and email servers). We evaluated our methodology in a comprehensive set of experiments over collections of real-world, advanced, and notorious malware (both ransomware and RATs) and legitimate programs. The results show that our suggested methodology is able to detect the presence of unknown malware with an average TPR of 97.9% and an FPR of 0%. Such results and capabilities can form the ground for the development of practical detection tools for corporations and companies.
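The sequential mining step above is not spelled out in the abstract. As a hedged sketch of one common way to do it, the n-gram counting below turns a system-call trace into classifier features; the traces and the "encrypt" call are invented for illustration (real traces would use actual OS system calls):

```python
from collections import Counter

def syscall_ngrams(trace, n=3):
    """Slide a window of length n over the system-call trace and
    count each n-gram; the counts serve as features for a
    machine-learning classifier."""
    return Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))

# Hypothetical traces: a benign file access vs. a ransomware-like
# read-encrypt-write-delete loop ("encrypt" is illustrative, not a syscall).
benign = ["open", "read", "write", "close", "open", "read", "close"]
ransom = ["open", "read", "encrypt", "write", "unlink",
          "open", "read", "encrypt", "write", "unlink"]

benign_feats = syscall_ngrams(benign)
ransom_feats = syscall_ngrams(ransom)

# N-grams that only appear in the suspicious trace are candidate
# "informative" features for the classifier.
suspicious = set(ransom_feats) - set(benign_feats)
```

In practice the informative n-grams would be selected over many traces (e.g., by frequency or information gain) before being fed to the learning algorithm.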
With the development of network infrastructure, a large volume of data will be exchanged with increased bandwidth. Many applications connect people to the rest of the world through the public network; thus, privacy and security have become a concern. Under these circumstances, enterprises increasingly tend to host their data and services on private clouds dedicated to their own use rather than on public cloud services. However, in contrast to the well-investigated total cost of ownership (TCO) of public clouds, analytic research on the cost of purchasing and operating private clouds remains a blank. In this work, we first review the state-of-the-art TCO literature to summarize the models, tools, and cost optimization techniques for public clouds. Based on our survey, we envision TCO modeling and optimization for private clouds by comparing the features of public and private clouds. Finally, we propose a heuristic algorithm, conflict-aware first-fit, to optimize the total cost of ownership of a private cloud by minimizing the number of racks needed when deploying servers.
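The abstract names the conflict-aware first-fit heuristic without giving its procedure. A minimal sketch of what such a placement could look like, under the assumption that "conflicts" are pairs of servers that must not share a rack (all names and capacities are illustrative, not the paper's model):

```python
def conflict_aware_first_fit(servers, rack_capacity, conflicts):
    """Place each server in the first rack that has spare capacity and
    contains no conflicting server; open a new rack otherwise.
    Minimizing opened racks is the first-fit goal."""
    racks = []  # each rack is a list of server ids
    for server in servers:
        for rack in racks:
            if len(rack) < rack_capacity and not any(
                    frozenset((server, placed)) in conflicts
                    for placed in rack):
                rack.append(server)
                break
        else:
            racks.append([server])  # no compatible rack: open a new one
    return racks

# Hypothetical constraint: two database replicas must not share a rack.
conflicts = {frozenset(("db1", "db2"))}
racks = conflict_aware_first_fit(["db1", "web1", "db2", "web2"],
                                 rack_capacity=2, conflicts=conflicts)
```

Plain first-fit is a classic bin-packing approximation; the conflict check is the only addition, trading a possibly higher rack count for constraint satisfaction.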
Choosing a storage medium today offers many options, one of which is cloud storage. SARDrive is a community-built, cloud-based information system designed to make it easy for users to share their files safely and quickly online with other registered users, and to let them view, upload, and download files. Determining the effectiveness of SARDrive requires a test that produces measurable values. ISO 9126 is one of the most important standards in the field of quality assurance. It comprises several quality factors: Functionality, Reliability, Usability, Efficiency, Maintainability, and Portability. Of these six quality factors, the authors selected three for testing: Functionality, Usability, and Efficiency. The test results for SARDrive were 87.3% (rated "highly suitable") for the Usability aspect, 82.3% (rated "highly suitable") for the Efficiency aspect, and 80.3% (rated "suitable") for the Functionality aspect. Keywords: effectiveness, cloud computing, private cloud storage, ISO 9126
Interoperability remains the key problem in multi-discipline collaboration based on building information modeling (BIM). Although various methods have been proposed to solve the technical issues of interoperability, such as data sharing and data consistency, organizational issues, including data ownership and data privacy, remain unresolved to date. These organizational issues prevent stakeholders from sharing their data due to concerns about losing control of it. This study proposes a multi-server information-sharing approach on a private cloud, after analyzing the requirements for cross-party collaboration, to address the aforementioned issues and to prepare for massive data handling in the near future. The approach adopts a global controller to track the location, ownership, and privacy of the data, which are stored on different servers controlled by different parties. Furthermore, data consistency conventions, parallel sub-model extraction, and sub-model integration with model verification are investigated in depth to support information sharing in a distributed environment and to maintain data consistency. With this approach, the ownership and privacy of the data remain under the owner's control while the data that must be shared can still be made available to other parties. The application of the multi-server approach to information interoperability and cross-party collaboration is illustrated using a real construction project, an airport terminal. Validation shows that the proposed approach is feasible for maintaining the ownership and privacy of the data while supporting cross-party data sharing and collaboration, thus avoiding possible legal problems regarding data copyrights or other legal issues.
• The requirements for cross-party collaboration, especially regarding data ownership and privacy, were investigated in depth.
• We proposed a multi-server approach to address the challenges of data ownership and privacy in cross-party collaboration.
• A global controller was introduced for data tracking and authorization management for multiple stakeholders.
• Algorithms and processes for sub-model extraction and integration in a distributed multi-server environment were proposed.
• A prototype system was developed and successfully validated in a large-scale airport project.
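The global controller described above tracks location, ownership, and access rights without holding the data itself. As a rough sketch only (the class, fields, and party names are invented for illustration, not taken from the study), such a registry might look like:

```python
class GlobalController:
    """Tracks, for each shared BIM sub-model, which party's server
    stores it, who owns it, and who may read it. The data itself
    stays on the owner's server; only metadata lives here."""

    def __init__(self):
        self.registry = {}  # sub-model id -> metadata record

    def register(self, sub_model, server, owner, readers):
        """The owner declares where a sub-model lives and who may see it."""
        self.registry[sub_model] = {
            "server": server,
            "owner": owner,
            "readers": set(readers) | {owner},
        }

    def locate(self, sub_model, requester):
        """Return the hosting server only for authorized requesters;
        unauthorized parties never learn where the data resides."""
        rec = self.registry.get(sub_model)
        if rec and requester in rec["readers"]:
            return rec["server"]
        return None

# Hypothetical usage: the architect shares a roof sub-model with the
# structural engineer, but not with the contractor.
gc = GlobalController()
gc.register("terminal_roof", server="arch-server", owner="architect",
            readers=["structural_engineer"])
```

Keeping only metadata in the controller is what lets each party retain physical control of its files while still enabling authorized cross-party retrieval.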