The aim of this article is to compare the performance of five popular virtualization solutions used in the private cloud model. The analysis was conducted for five virtualizers: Proxmox, OpenVZ, OpenNebula, VMware ESX, and XenServer. The OpenSSL, GeekBench, and Phoronix Test Suite tools were used to perform the performance tests. The comparison covered selected encryption methods, data block sizes, and compilation speed.
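The abstract does not show the exact benchmark harness; as a rough, self-contained illustration of the kind of measurement such tools perform (throughput as a function of data block size), the Python sketch below times SHA-256 hashing over different chunk sizes. SHA-256 and the chosen sizes are stand-ins, not the paper's actual test matrix:

```python
import hashlib
import time

def throughput(block_size: int, total_bytes: int = 8 * 1024 * 1024) -> float:
    """Hash `total_bytes` of data in `block_size` chunks; return MB/s."""
    block = b"\x00" * block_size
    n_blocks = total_bytes // block_size
    start = time.perf_counter()
    h = hashlib.sha256()
    for _ in range(n_blocks):
        h.update(block)
    elapsed = time.perf_counter() - start
    return (n_blocks * block_size) / (1024 * 1024) / elapsed

# Small blocks pay more per-call overhead, so throughput typically
# rises with block size, which is what block-size benchmarks expose.
for size in (64, 1024, 8192, 65536):
    print(f"{size:>6} B blocks: {throughput(size):8.1f} MB/s")
```

The same pattern (fixed payload, varying chunk size, wall-clock timing) underlies OpenSSL's own `speed` command.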
Cloud manufacturing is emerging as a new manufacturing paradigm as well as an integrated technology, which promises to transform today's manufacturing industry towards service-oriented, highly collaborative and innovative manufacturing in the future. In order to better understand cloud manufacturing, this paper provides a critical review of relevant concepts and ideas in cloud computing as well as advanced manufacturing technologies that contribute to the evolution of cloud manufacturing. The key characteristics of cloud manufacturing are also presented in order to clarify the cloud manufacturing concept. Furthermore, a four-process structure is proposed to describe the typical scenario in cloud manufacturing, hoping to provide a theoretical reference for practical applications. Finally, an application case of a private cloud manufacturing system for a conglomerate is presented.
•Organizations align IT-based capabilities with cloud delivery options to meet performance objectives.
•Managerial, technical and relational IT capabilities positively affect cloud success.
•Cloud success positively affects firm performance.
•The moderating effect of cloud strategy (public, private and hybrid) was partially significant.
Our study examines the effect of relational, managerial and technical IT-based capabilities on cloud computing success, and analyzes how this success impacts firm performance with respect to the processes and operations supported by cloud computing. Additionally, we investigate the complex relationships that exist between IT capabilities and the public, private and hybrid cloud delivery models. Data from a sample of 302 organizations were collected to empirically test our model. The results indicate that a relational IT capability is the most influential factor in facilitating cloud success, compared to technical and managerial IT capabilities. Furthermore, an evaluation of the interrelationships indicates that the public and hybrid cloud delivery models may be more dependent on relational IT capabilities for cloud success, while the flexibility and agility of the firm's internal IT (technical IT capability) facilitates the public cloud. We discuss how IT-based capabilities may be used to leverage cloud delivery models to positively influence the successful implementation of cloud computing, and ultimately, firm performance for the processes and operations supported by the cloud.
•A solution for trusted detection of unknown ransomware in VMs is proposed.
•Valuable data is extracted from the VM's memory dump using the Volatility framework.
•General descriptive features are proposed and successfully leveraged by ML algorithms.
•The solution was rigorously evaluated using notorious and professional ransomware.
•The Random Forest classifier successfully detected known and unknown ransomware.
Cloud computing is one of today's most popular and important IT trends. Currently, most organizations use cloud computing services (public or private) as part of their computer infrastructure. Virtualization technology is at the core of cloud computing, and virtual resources, such as virtual servers, are commonly used to provide services to the entire organization. Due to their importance and prevalence, virtual servers in an organizational cloud are constantly targeted by cyber-attackers who try to inject malicious code or malware into the server (e.g., ransomware). Often, server administrators are unaware that the server has been compromised, despite the presence of detection solutions on the server (e.g., an antivirus engine). In other cases, the breach is detected after a long period of time, when significant damage has already occurred. Thus, detecting that a virtual server has been compromised is extremely important for organizational security. Existing security solutions that are installed on the server (e.g., antivirus) are considered untrusted, since malware (particularly sophisticated variants) can evade them. Moreover, these tools are largely incapable of detecting new, unknown malware. Machine learning (ML) methods have been shown to be effective at detecting malware in various domains. In this paper, we present a novel methodology for trusted detection of ransomware in virtual servers on an organization's private cloud. We conducted trusted analysis of volatile memory dumps taken from a virtual machine (memory forensics), using the Volatility framework, and created general descriptive meta-features. We leveraged these meta-features, using machine learning algorithms, for the detection of unknown ransomware in virtual servers. We evaluated our methodology extensively in five comprehensive experiments of increasing difficulty, on two different popular servers (an IIS server and an email server).
We used a collection of real-world, professional, and notorious ransomware and a collection of legitimate programs. The results show that our methodology is able to detect anomalous states of a virtual machine, as well as the presence of both known and unknown ransomware, obtaining the following results: TPR = 1, FPR = 0.052, F-measure = 0.976, and AUC = 0.966, using the Random Forest classifier. Finally, we showed that our proposed methodology is also capable of detecting an additional type of malware known as a remote access Trojan (RAT), which is used to attack organizational VMs.
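The paper's actual pipeline (Volatility-extracted meta-features fed to a Random Forest) is not reproduced here; as a minimal, dependency-free stand-in for the core idea of scoring a dump's descriptive meta-features against a benign baseline, the sketch below uses per-feature z-scores. All feature names and values are invented for illustration:

```python
from statistics import mean, stdev

# Hypothetical meta-features per memory dump, e.g. counts a Volatility
# plugin might yield (process count, open handles, loaded DLLs, ...).
def to_vector(dump_features: dict) -> list:
    keys = sorted(dump_features)          # fixed feature ordering
    return [float(dump_features[k]) for k in keys]

def fit_baseline(benign_vectors):
    """Per-feature mean and std over dumps taken from a clean VM."""
    cols = list(zip(*benign_vectors))
    return [(mean(c), stdev(c) if len(c) > 1 else 1.0) for c in cols]

def anomaly_score(vector, baseline):
    """Mean absolute z-score against the benign baseline."""
    scores = [abs(v - m) / (s or 1.0) for v, (m, s) in zip(vector, baseline)]
    return sum(scores) / len(scores)

benign = [to_vector({"procs": 40, "handles": 900, "dlls": 120}),
          to_vector({"procs": 42, "handles": 950, "dlls": 118}),
          to_vector({"procs": 41, "handles": 920, "dlls": 121})]
baseline = fit_baseline(benign)
# A dump with an abnormal process/handle profile scores far higher.
suspect = to_vector({"procs": 75, "handles": 4000, "dlls": 340})
print(anomaly_score(suspect, baseline))
```

A trained classifier such as Random Forest replaces the hand-rolled scoring here, but the feature-vector construction step is the same in spirit.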
With the development of science and technology, the computer age has arrived. As a new type of computer network application technology, cloud computing is of great significance in promoting China's economic development. However, cloud technology faces serious security problems as it develops. Although cloud providers all claim that their services are secure in every respect, especially in data management, business-related data is the lifeline of large enterprises. Large enterprises will not run their important applications on the public cloud, and the private cloud has great advantages in this respect. This paper introduces the relevant concepts and network security deployment methods of cloud technology and the private cloud security platform. The research on the private cloud security platform aims to better resist the next generation of security threats. Deploying network security on a private cloud security platform can reduce the informatization costs of enterprises, public institutions and government departments and improve work efficiency.
Analyzing compute functions by utilizing the IaaS model for private cloud computing services in Packstack development is one of the large-scale data storage solutions. Problems that often occur when implementing various applications include the increased need for server resources, the monitoring process, performance efficiency, and time constraints in building servers and upgrading hardware. These problems lead to long server downtime. The development of private cloud computing technology could be a solution to these problems. This research employed OpenStack and Packstack, deploying one controller-node server and two compute-node servers. Server administration with IaaS and self-service approaches made scalability testing simpler and time-efficient. Resizing a virtual server (instance) while it was running shows that the measured overhead in private cloud computing is more optimal, with a downtime of 16 seconds.
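The abstract reports a 16-second downtime during a live instance resize but does not say how it was measured; one common way is a probe loop that timestamps when the instance stops and resumes responding. The Python sketch below illustrates that pattern with a synthetic probe standing in for a real health check (e.g., ping or HTTP against the instance):

```python
import time

def measure_downtime(probe, interval=1.0, timeout=120.0):
    """Poll `probe` until it fails and then recovers; return the seconds
    the service was unreachable, or None if no outage completed in time."""
    down_at = None
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        ok = probe()
        now = time.monotonic()
        if not ok and down_at is None:
            down_at = now          # outage began
        elif ok and down_at is not None:
            return now - down_at   # outage ended
        time.sleep(interval)
    return None

# Synthetic probe: "fails" for the first 3 polls, then recovers,
# mimicking the brief unavailability during an instance resize.
calls = {"n": 0}
def fake_probe():
    calls["n"] += 1
    return calls["n"] > 3

downtime = measure_downtime(fake_probe, interval=0.01)
print(f"measured downtime: {downtime:.2f} s")
```

Against a real instance, `fake_probe` would be replaced by an actual reachability check, and `interval` bounds the measurement resolution.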
Disaster recovery (DR) plays a vital role in restoring an organization's data in the case of emergencies and hazardous accidents. While many papers in security focus on privacy and security technologies, few address the DR process, particularly for a Big Data system. Moreover, the studies that have investigated DR methods all follow a “single-basket” approach, meaning there is only one destination from which to secure the restored data, and they mostly use only one type of technology implementation. We propose a “multi-purpose” approach, which allows data to be restored to multiple sites with multiple methods to ensure the organization recovers a very high percentage of data, close to 100%, with data recovered at all sites in London, Southampton and Leeds. The traditional TCP/IP baseline, snapshot and replication are used, with their system design and development explained. We compare performance between the different approaches, and the multi-purpose approach stands out in the event of an emergency. Data at all sites in London, Southampton and Leeds can be restored and updated simultaneously. Results show that the optimized command can recover 1 TB of data within 650 s, and the command for three sites can recover 1 TB of data within 1360 s. All data backup and recovery has a failure rate of 1.6% or below. Data centers should adopt the multi-purpose approach to ensure all the data in the Big Data system can be recovered and retrieved without experiencing prolonged downtime and complex recovery processes. We make recommendations for adopting the “multi-purpose” approach for data centers, and demonstrate that 100% of data is fully recovered with low execution time at all sites during a hazardous event as described in the paper.
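A quick back-of-envelope check of the reported figures can be expressed directly, under two assumptions not stated in the abstract: 1 TB means 1024^4 bytes, and the 1360 s three-site run restores the full 1 TB at each site:

```python
TB = 1024**4  # bytes; assumption: "1 TB" is binary (tebibyte)

def throughput_gbps(data_bytes: int, seconds: float) -> float:
    """Implied recovery throughput in gigabits per second."""
    return data_bytes * 8 / seconds / 1e9

single = throughput_gbps(TB, 650)       # one-site optimized recovery
multi = throughput_gbps(3 * TB, 1360)   # assumption: 1 TB restored per site
print(f"single-site: {single:.2f} Gb/s, three-site aggregate: {multi:.2f} Gb/s")
```

Under those assumptions, the three-site run implies a higher aggregate throughput than the single-site run, consistent with the sites being restored in parallel.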
The virtual network embedding problem is to embed virtual networks (VNs) in a substrate network so that revenue or acceptance ratio is maximized. Previous studies usually assume disclosed communication demand among the virtual nodes in a VN, which mismatches real-world cloud computing scenarios. In this paper, we propose a new VN abstraction based on the widely used Virtual Private Cloud model, where internal communication demand is unknown to cloud providers. In contrast with the majority of existing research, we allow the co-location of virtual nodes belonging to the same VN, and introduce the concept of switching capacity for practical resource reservation. We categorize the substrate resources in cloud data centers into additive and non-additive for the first time, and devise our algorithms accordingly. After formulating the problem, we propose a solution framework named HA-D3QN (Heuristic Assisted Dueling Double Deep Q Network). Essentially, HA-D3QN selects the best responses to different system states by combining the D3QN deep reinforcement learning structure with candidate actions, which are generated by our proposed heuristic algorithms to address the exponentially large action space. Finally, we conduct extensive simulation experiments, the results of which verify the effectiveness of our approach.
•Propose a novel virtual network abstraction based on Virtual Private Cloud.
•Introduce the concept of switching capacity for practical resource estimation.
•Categorize substrate resources into additive and non-additive for the first time.
•Propose a deep reinforcement learning-based framework for virtual network embedding.
•Propose three heuristic algorithms for candidate action generation.
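HA-D3QN itself is a full deep-RL system and is not reproduced here; the sketch below illustrates only the "heuristic-assisted" selection idea: heuristics propose a handful of candidate actions out of an exponentially large space, and a learned value function chooses among them. Every name and value is a toy placeholder (a dict stands in for the trained Q-network):

```python
import random

def select_action(state, heuristics, q_value, epsilon=0.1):
    """Heuristic-assisted selection: heuristics shrink the action space
    to a few candidates; the value function picks among them, with
    epsilon-greedy exploration."""
    candidates = [h(state) for h in heuristics]
    if random.random() < epsilon:
        return random.choice(candidates)   # explore
    return max(candidates, key=lambda a: q_value(state, a))  # exploit

# Toy stand-ins: state is ignored; actions are labeled placement choices.
heuristics = [lambda s: "greedy-cpu",
              lambda s: "greedy-bandwidth",
              lambda s: "co-locate"]
q = {"greedy-cpu": 0.4, "greedy-bandwidth": 0.9, "co-locate": 0.7}
action = select_action(None, heuristics, lambda s, a: q[a], epsilon=0.0)
print(action)
```

In the paper's framework, the Q-values come from the trained dueling double DQN rather than a lookup table, but the candidate-then-select structure is the same.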