Next-generation data centers refactor the traditional data center to create pools of disaggregated resource units, such as processors, memory, storage, network, power, and cooling, termed composable systems (CSs), with the purpose of offering flexibility, automation, optimization, and scalability. In this paper, we solve an optimization problem to allocate CSs in next-generation data centers. The main goal is to maximize CS availability for the application owner, subject to the application's minimum requirements (in terms of CPU, memory, network, and storage) and its available budget. This problem is modeled as a bounded multidimensional knapsack problem, which we solve using Dynamic Programming (DP) and two soft computing approaches: Differential Evolution (DE) and Particle Swarm Optimization (PSO). We consider two different scenarios in order to analyze heterogeneity and variability aspects when allocating CSs in a data center. Moreover, we also analyze the importance of system components in order to guide and prioritize upgrades to the system design.
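The bounded multidimensional knapsack formulation above can be sketched as a small dynamic program. This is a toy illustration, not the paper's model: the unit costs, resource figures, availability scores, and copy bounds below are invented, and the objective is simplified to summing per-unit scores subject to one budget dimension and minimum CPU/memory requirements.

```python
# Toy DP for a bounded multidimensional knapsack: maximize total
# availability score subject to a budget cap and minimum CPU/memory.
# All numbers and names are illustrative assumptions.
def allocate(units, budget, min_cpu, min_mem):
    """units: list of (cost, cpu, mem, availability_score, max_copies)."""
    # dp maps (cost_used, cpu_capped, mem_capped) -> best score; cpu/mem
    # are capped at the requirement so the state space stays bounded.
    dp = {(0, 0, 0): 0.0}
    for cost, cpu, mem, score, max_copies in units:
        for _ in range(max_copies):       # each pass allows one more copy
            new_dp = dict(dp)
            for (b, c, m), v in dp.items():
                nb = b + cost
                if nb > budget:
                    continue
                key = (nb, min(c + cpu, min_cpu), min(m + mem, min_mem))
                if v + score > new_dp.get(key, float("-inf")):
                    new_dp[key] = v + score
            dp = new_dp
    # feasible states are those that reached both minimum requirements
    feasible = [v for (b, c, m), v in dp.items()
                if c >= min_cpu and m >= min_mem]
    return max(feasible) if feasible else None
```

For example, with `units = [(10, 4, 8, 0.99, 2), (6, 2, 4, 0.95, 3)]`, a budget of 30, and minimums of 8 CPUs and 16 GB, the best selection is one copy of the first unit plus three of the second.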
The Creative Cities movement seeks creative solutions to environmental issues and inventive ways to improve people's lives. One of its main objectives is to create dynamic and attractive spaces for citizens, and information technology can (and should) help create such environments. Augmented Reality (AR), a technology that overlays virtual objects onto the real world, can be used for this purpose; one way to accomplish it is through 3D reconstruction of the environment. This work presents a comparison of libraries for 3D reconstruction, with the objective of developing an AR application based on 3D reconstruction in the context of Creative Cities. In addition, this study compares and analyzes the performance of virtualization tools (KVM and VirtualBox) when executing this type of application in a cloud infrastructure.
The world population's life expectancy has gradually increased. According to the World Health Organization (WHO), life expectancy will reach 90 years by 2030, and quality of life is one of the most important aspects of aging. The academic and business communities are devoting many efforts to developing new applications that promote quality of life for this portion of the population; services such as vital-sign monitoring, fall detection, and heart-attack detection are increasingly in evidence. Most of these e-health systems are built on intelligent devices, that is, the Internet of Things (IoT). However, due to hardware capacity limitations, IoT by itself is not able to process, store, and guarantee the quality of these services. To mitigate this issue, IoT has two major allies for providing e-health services with high availability and quality: fog and cloud computing. This paper presents an in-progress e-health architecture that uses IoT for data acquisition, fog for data pre-processing and short-term storage, and cloud for data processing, analysis, and long-term storage. We also describe the main challenges in providing an e-health application with high availability, high performance, and accessibility at low deployment and maintenance cost.
Recent moves to consider misogyny a hate crime have refocused efforts by owners of web properties to detect and remove misogynistic speech. This paper considers the use of deep learning techniques for the detection of misogyny in Urban Dictionary, a crowdsourced online dictionary for slang words and phrases. We compare the performance of two deep learning techniques, Bi-LSTM and Bi-GRU, against that of more conventional machine learning techniques: logistic regression, Naive Bayes classification, and Random Forest classification. We find that both deep learning techniques achieve greater accuracy in detecting misogyny in Urban Dictionary than the other techniques examined.
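One of the conventional baselines named above, multinomial Naive Bayes, can be sketched in a few lines. This is a generic illustration of the technique, not the paper's implementation; the training documents, tokens, and labels below are neutral placeholders, not Urban Dictionary data.

```python
# Minimal multinomial Naive Bayes text classifier with Laplace smoothing.
# Illustrative only: not the paper's model or data.
import math
from collections import Counter

class NaiveBayes:
    def fit(self, docs, labels):
        self.classes = set(labels)
        self.prior = {c: labels.count(c) / len(labels) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for doc, y in zip(docs, labels):
            self.counts[y].update(doc.split())
        self.vocab = {w for c in self.counts.values() for w in c}
        return self

    def predict(self, doc):
        best, best_lp = None, float("-inf")
        for c in self.classes:
            total = sum(self.counts[c].values())
            lp = math.log(self.prior[c])
            for w in doc.split():
                # Laplace smoothing: unseen words don't zero the likelihood.
                lp += math.log((self.counts[c][w] + 1) /
                               (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```

A bag-of-words baseline like this ignores word order, which is precisely what the Bi-LSTM and Bi-GRU models add by reading the token sequence in both directions.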
In this paper, we propose the use of reinforcement learning to deploy a service function chain (SFC) of a cellular network service and to manage the operation of its virtual network functions (VNFs). The SFC is deployed by the reinforcement learning agent in a scenario with distributed data centers, where the VNFs run in virtual machines on commodity servers. VNF management comprises creating, deleting, and restarting VNFs. The main purpose is to reduce the number of lost packets while taking into account the energy consumption of the servers. We use the Proximal Policy Optimization (PPO2) algorithm to implement the agent, and preliminary results show that the agent is able to allocate the SFC and manage the VNFs, reducing the number of lost packets.
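The trade-off the agent optimizes, fewer lost packets versus server energy consumption, is typically encoded in the reward signal. A minimal sketch of such a reward, assuming a linear energy model; the weight, the per-server energy figure, and the no-op action are illustrative assumptions, not values from the paper:

```python
# Illustrative reward shaping: penalize lost packets, plus a weighted
# penalty for the energy drawn by active servers. Constants are assumed.
def reward(lost_packets, active_servers,
           energy_per_server=1.0, energy_weight=0.1):
    return -(lost_packets + energy_weight * energy_per_server * active_servers)

# Management actions available to the agent (the no-op is an assumption):
ACTIONS = ("create", "delete", "restart", "noop")
```

With this shaping, the agent prefers fewer lost packets at equal server count, and fewer active servers at equal packet loss; PPO2 itself would then learn a policy over `ACTIONS` against this signal.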
Chronic diseases are growing exponentially: today, over 900 million individuals around the world suffer from some chronic disease. For this reason, e-health systems are being developed to provide a better quality of life for patients; for instance, there are systems for epilepsy detection, vital-sign monitoring, and diabetes control, among others. Deep learning has become an important technique embedded in these systems, predicting and classifying data without the need for a specialist monitoring patients 24 hours a day. However, combining e-health systems and deep learning techniques also brings several challenges that need to be overcome. In this context, this work-in-progress proposes an e-health system based on fog and cloud computing that uses deep learning to predict epileptic seizures. We also present some research challenges for this implementation.
Increased complexity in IT, big data, and advanced analytical techniques are some of the trends driving demand for more sophisticated and scalable search technology. Although Quality of Service (QoS) is a critical success factor in most enterprise software service offerings, it is often not a generic component of the enterprise search software stack. In this paper, we explore enterprise search engine dependability and performance using a real-world company architecture and associated data sourced from an ElasticSearch implementation on Linknovate.com. We propose a Fault Tree model to assess the availability and reliability of the Linknovate.com architecture. The results of the Fault Tree model are fed into a Stochastic Petri Net (SPN) model to analyze how failures and redundancy impact the application performance of the use-case system. Availability and mean time to failure (MTTF) were used to evaluate dependability, and throughput was used to evaluate the performance of the target system. The best results for all three metrics were obtained in scenarios with high levels of redundancy.
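The dependability metrics above are connected by a standard relation: for a single repairable component with exponential failure and repair times, steady-state availability is A = MTTF / (MTTF + MTTR). A small sketch of that relation and of why redundancy improves it; the failure and repair figures are illustrative, not Linknovate.com measurements.

```python
# Steady-state availability of a repairable component, and the effect of
# duplicating it. Numbers are illustrative assumptions.
def availability(mttf_h, mttr_h):
    """A = MTTF / (MTTF + MTTR), both in hours."""
    return mttf_h / (mttf_h + mttr_h)

# A node failing on average once a year (8760 h) and repaired in 2 h:
a_node = availability(8760.0, 2.0)

# A redundant pair of such nodes (independent failures) is unavailable
# only when both are down, so its unavailability is squared:
a_pair = 1.0 - (1.0 - a_node) ** 2
```

Squaring the unavailability is why the scenarios with high redundancy returned the best availability and MTTF figures.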
To meet service level agreement (SLA) requirements, the majority of enterprise IT infrastructure is typically overprovisioned, underutilized, non-compliant, and lacking in required agility, resulting in significant inefficiencies. As enterprises introduce and migrate to next-generation applications designed to be horizontally scalable, they require infrastructure that can manage the duality of legacy and next-generation application requirements. To address this, composable data center infrastructure disaggregates and refactors compute, storage, network, and other infrastructure resources into shared resource pools that can be "composed" and allocated on demand. In this paper, we model a theoretical resource allocation problem in a composable data center infrastructure as a bounded multidimensional knapsack and then apply two multi-objective optimization algorithms, the Non-dominated Sorting Genetic Algorithm (NSGA-II) and Generalized Differential Evolution (GDE3), to allocate resources efficiently. The main goal is to maximize resource availability for the application owner while meeting minimum requirements (in terms of CPU, memory, network, and storage) within budget constraints. We consider two different scenarios to analyze heterogeneity and variability aspects when allocating resources on composable data center infrastructure.
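The ranking step at the heart of NSGA-II is non-dominated sorting. A minimal sketch of its core operations, Pareto dominance and extraction of the first front, for a maximization problem over objective vectors such as (availability, -cost); the sample points are illustrative, not the paper's solutions.

```python
# Pareto dominance and first-front extraction, the core of NSGA-II's
# ranking. All objectives are maximized; sample data is illustrative.
def dominates(a, b):
    """a dominates b: >= in every objective and > in at least one."""
    return (all(x >= y for x, y in zip(a, b)) and
            any(x > y for x, y in zip(a, b)))

def first_front(points):
    """Points not dominated by any other point (the first Pareto front)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

The full algorithm repeats this sorting to assign ranks, then uses crowding distance within each front to keep the population spread along the availability/budget trade-off.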
Many enterprises rely on cloud infrastructure to host their critical applications (such as trading, banking transactions, airline reservation systems, and credit card authorization). The unavailability of these applications may lead to severe consequences that go beyond financial losses, damaging the cloud provider's reputation as well. However, maintaining high availability in a cloud data center is a difficult task due to its complexity. The power subsystem is crucial for the entire operation of the data center because it supplies power to all other subsystems, including IT components and cooling equipment. Some studies have already proposed models to evaluate the availability of the power subsystem, but none of them are based on standard redundancy models. Standards guide cloud providers regarding availability, points of failure, and watts per square foot based on component redundancy. This paper proposes Reliability Block Diagram (RBD) and Petri Net models based on the TIA-942 standard to estimate the availability of the data center power subsystem and to analyze how power subsystem failures impact the availability of critical applications. These models are important for resource planning and decision making by cloud providers, because they can identify which components to invest in to improve the availability level.
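The RBD algebra behind such models is compact: assuming independent blocks, availabilities multiply in series, and parallel (redundant) paths combine through their unavailabilities. A sketch under those assumptions; the topology and availability figures below are illustrative, not taken from TIA-942 or the paper.

```python
# Series/parallel Reliability Block Diagram algebra, assuming block
# independence. Availability figures are illustrative assumptions.
def series(*avail):
    """All blocks must be up: availabilities multiply."""
    a = 1.0
    for x in avail:
        a *= x
    return a

def parallel(*avail):
    """At least one block must be up: 1 - product of unavailabilities."""
    u = 1.0
    for x in avail:
        u *= (1.0 - x)
    return 1.0 - u

# Toy power path: utility feed in series with a redundant UPS pair
# and a PDU:
a_power = series(0.999, parallel(0.99, 0.99), 0.9995)
```

Evaluating such expressions for alternative topologies is how the models point at the component whose upgrade (or duplication) buys the most availability per unit of investment.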