Cooperative co-evolution (CC) is an evolutionary computation framework that can be used to solve high-dimensional optimization problems via a "divide-and-conquer" mechanism. However, the main challenge when using this framework lies in problem decomposition: deciding how to allocate decision variables, especially interacting decision variables, to particular subproblems. Existing decomposition methods are typically computationally expensive. In this paper, we propose a new decomposition method, which we call recursive differential grouping (RDG), that considers the interaction between decision variables based on nonlinearity detection. RDG recursively examines the interaction between a selected decision variable and the remaining variables, placing all interacting decision variables into the same subproblem. We use analytical methods to show that RDG can efficiently decompose a problem without explicitly examining all pairwise variable interactions. We evaluated the efficacy of RDG using large-scale benchmark optimization problems. Numerical simulation experiments showed that RDG greatly improved the efficiency of problem decomposition in terms of time complexity. Significantly, when RDG was embedded in a CC framework, the optimization results were better than those from seven other decomposition methods.
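The core of RDG is a nonlinearity check: a variable interacts with a set of remaining variables if perturbing it changes the objective by a different amount depending on where those other variables sit. A minimal sketch of that check (the bounds, perturbation points, and threshold `eps` are illustrative choices, not the paper's exact settings):

```python
def interact(f, dim, i, rest, lb=-1.0, ub=1.0, eps=1e-6):
    """Flag interaction between variable i and the set `rest`:
    compare the effect of perturbing x_i at two different settings
    of the `rest` variables. Unequal effects imply non-separability."""
    base = [lb] * dim
    # Delta 1: effect of moving x_i with `rest` at the lower bound
    a = base[:]
    a[i] = ub
    delta1 = f(a) - f(base)
    # Delta 2: the same perturbation with `rest` moved to the midpoint
    b = base[:]
    for j in rest:
        b[j] = (lb + ub) / 2.0
    c = b[:]
    c[i] = ub
    delta2 = f(c) - f(b)
    return abs(delta1 - delta2) > eps

# A separable sum of squares vs. a product term that couples x0 and x1
sep = lambda x: x[0] ** 2 + x[1] ** 2
nonsep = lambda x: x[0] * x[1]
```

On the separable function the two differences match, so no interaction is flagged; the product term is detected. RDG applies this test recursively to whole groups of variables, which is what avoids the quadratic cost of testing every pair.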
► The provision of low cost cooling for data centers is highly sought after as electricity costs and heat dissipation volumes increase. ► Many Australian cities enjoy cool and dry weather conditions during the winter months. ► Data center operators can exploit these conditions by introducing outside air into their data centers to reduce cooling costs. ► This paper evaluates the potential of using economizers for data center cooling by analyzing hourly weather data from the past 12 years. ► We identified that the capital cities in the southern states of Australia have sizable potential for using economizers more than 60% of the time in a typical year.
The provision of low cost cooling is challenging due to high energy costs and increasing heat dissipation volumes from data centers. In Australia, with the introduction of an emissions trading scheme, the cost of carbon-based energy is expected to increase. To reduce cooling costs, alternative low cost cooling methods for data centers are highly sought after. In this study, we investigated using air-side economizers to introduce outside air with the desired supply air conditions, exploiting the cool and dry Australian climate. Our approach was based on analyzing hourly temperature and humidity data gathered over the past 12 years for 20 weather monitoring stations across Australia, representing all geographical regions, and determining the potential of using air-side economizers at those locations. As a result, we demonstrated that there is sizable potential for using air-side economizers in some states, which could lead to significant savings on cooling costs for data center operators.
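The economizer-potential analysis reduces to counting the hours in which outside air already satisfies the supply-air envelope. A toy version of that calculation, with illustrative temperature and humidity thresholds rather than the study's actual envelope:

```python
def economizer_fraction(hours, t_min=15.0, t_max=25.0, rh_max=60.0):
    """Fraction of hours whose outside-air conditions fall inside an
    assumed supply-air envelope (thresholds here are illustrative)."""
    ok = sum(1 for t, rh in hours if t_min <= t <= t_max and rh <= rh_max)
    return ok / len(hours)

# Four hypothetical hourly (temperature degC, relative humidity %) readings
sample = [(10.0, 80.0), (18.0, 50.0), (22.0, 55.0), (30.0, 40.0)]
```

Run over a full year of station data, this fraction is the "potential for using economizers" figure reported per location; the paper's claim is that for southern capitals it exceeds 0.6.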
Selecting the most appropriate algorithm to use when attempting to solve a black-box continuous optimization problem is a challenging task. Such problems typically lack algebraic expressions, derivative information cannot be calculated, and the problem may exhibit uncertainty or noise. In many cases, the input and output variables are analyzed without considering the internal details of the problem. Algorithm selection requires expert knowledge of search algorithm efficacy and skills in algorithm engineering and statistics. Even with the necessary knowledge and skills, success is not guaranteed.
In this paper, we present a survey of methods for algorithm selection in the black-box continuous optimization domain. We start the review by presenting Rice’s (1976) selection framework. We describe each of the four component spaces – problem, algorithm, performance and characteristic – in terms of requirements for black-box continuous optimization problems. This is followed by an examination of exploratory landscape analysis methods that can be used to effectively extract the problem characteristics. Subsequently, we propose a classification of the landscape analysis methods based on their order, neighborhood structure and computational complexity. We then discuss applications of the algorithm selection framework and the relationship between it and algorithm portfolios, hybrid meta-heuristics, and hyper-heuristics. The paper concludes with the identification of key challenges and proposes future research directions.
•Discussed thermal challenges of data centers.•Reviewed thermal management techniques of data centers from the chip to the cooling system.•Compared various modelling and optimization methods for air-cooled data centers.•Compared technical characteristics of various direct and indirect liquid-cooled data centers.•Reviewed thermal performance of various air- and water-side economizers.
The growing global demand for services offered by data centers (DCs) has increased their total power consumption and carbon emissions. Recent figures revealed that DCs account for around 2% of total US electrical power consumption, approximately 40% of which is for powering their cooling systems. A high portion of the energy spent on cooling is typically due to the inherent inefficiency of the heat removal process, which exists at multiple levels from the microchip to the cooling infrastructure. Depending on the type of cooling system, air-cooled or liquid-cooled, this inefficiency can be significantly reduced by utilizing various thermal management and efficiency enhancement techniques at each level. This paper reviews the state-of-the-art of multi-level thermal management techniques for both air- and liquid-cooled DCs. The main focus is on the sources of inefficiency and the improvement methods, with their configuration features and performance at each level. For air-cooled DCs, various advanced methods for the chip, server, rack, plenum and room levels have been reviewed. Recent advances in thermal modelling of air-cooled DCs and their energy optimization methods have also been broadly reviewed. Furthermore, the performance of various methods such as pool boiling, jet impingement, and spray cooling for direct liquid-cooled DCs, and single-phase, two-phase and heat pipe cooling for indirect liquid-cooled DCs, has been compared. Finally, free cooling as an energy efficient method for reducing the total power consumption of DC cooling systems has been reviewed. The advancements in the two main types of free cooling methods, air-side and water-side economizers, are discussed and their performance characteristics are compared.
•Developed an energy, exergy, environment and economic model for data center's cooling system.•Annual and monthly saving potential of economizers depend on their type and location.•Achieved 4–84% annual savings in the cooling energy consumption of data centers.•Achieved 8–80% annual savings in the cooling exergy destruction of data centers.•Achieved 8–75% and 10–85% annual savings in CO2 emission and cooling costs, respectively.
One of the main concerns in designing and operating data centers is the significant energy consumption of their cooling systems. Over the past years, developing and applying more energy efficient cooling strategies to reduce the environmental footprint and the total cost of ownership of data centers has drawn considerable attention. Cooling systems with an air-side economizer cycle provide one of the most efficient methods. Their effectiveness in terms of energy and exergy, and their environmental and economic impact, have been analyzed quantitatively in this research. Since the effectiveness highly depends on local climate conditions, the analysis has been carried out for 23 locations across Australia. First, a detailed cooling load of the data center has been estimated by considering the workload and heat generation characteristics of the running IT equipment. Then, a comprehensive energy, exergy, environment and economic model of nine different air-side economizer cycles has been developed. Finally, the effectiveness and profitability of each economizer cycle has been compared to a conventional cooling system by evaluating the annual and monthly saving potential at each location. The air-side economizers can yield maximum savings of 84%, 80%, 75% and 85% in the annual cooling energy consumption, exergy destruction, CO2 emission and cooling costs, respectively. The power usage effectiveness (PUE) of the data centers can also be reduced from an average of 1.42 to 1.22. These savings and efficiency enhancements are highly correlated with the type and geographic location of the air-side economizers. In general, the saving potential increases as we move further south in Australia, due to more favorable climate conditions.
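The reported PUE figures follow directly from the definition: total facility power divided by IT power. A small illustration with assumed loads (the 1 MW IT load and overhead figures below are hypothetical, chosen only to reproduce the quoted averages):

```python
def pue(it_power_kw, overhead_kw):
    """Power usage effectiveness: total facility power over IT power.
    `overhead_kw` covers cooling and other non-IT consumption."""
    return (it_power_kw + overhead_kw) / it_power_kw

# A 1 MW IT load with 420 kW of overhead gives a PUE of 1.42;
# cutting overhead to 220 kW yields 1.22, matching the averages above.
```

The difference between the two cases is precisely the cooling energy the economizer cycles avoid.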
One of the main types of genetic variations in cancer is Copy Number Variations (CNV). Whole exome sequencing (WES) is a popular alternative to whole genome sequencing (WGS) for studying disease-specific genomic variations. However, finding CNV in cancer samples using WES data has not been fully explored.
We present a new method, called CoNVEX, to estimate copy number variation in whole exome sequencing data. It uses the ratio of tumour and matched normal average read depths at each exonic region to predict copy gains or losses. However, the useful signal in WES data is hindered by the intrinsic noise present in the data itself, which limits its capacity to serve as a highly reliable CNV detection source. Here, we propose a method that uses the discrete wavelet transform (DWT) to reduce this noise. The identification of copy number gains/losses in each targeted region is performed by a Hidden Markov Model (HMM).
HMMs are frequently used to identify CNV in data produced by various technologies, including Array Comparative Genomic Hybridization (aCGH) and WGS. Here, we propose an HMM to detect CNV in cancer exome data. We used modified data from the 1000 Genomes Project to evaluate the performance of the proposed method. Using these data, we have shown that CoNVEX significantly outperforms existing methods in terms of precision. Overall, CoNVEX achieved a sensitivity of more than 92% and a precision of more than 50%.
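As a rough illustration of HMM-based calling, the sketch below runs a three-state (loss / neutral / gain) Viterbi decoder over log2 depth ratios with Gaussian-style emissions. The state means, noise level, and transition probabilities are illustrative placeholders, not CoNVEX's fitted parameters:

```python
import math

def viterbi_cnv(log_ratios, stay=0.9):
    """Decode the most likely copy-number state path for a sequence of
    log2(tumour/normal) depth ratios using a toy three-state HMM."""
    means = {"loss": -1.0, "neutral": 0.0, "gain": 0.58}
    states = list(means)
    sigma = 0.3

    def emit(s, x):  # Gaussian log-likelihood up to a constant
        return -((x - means[s]) ** 2) / (2 * sigma ** 2)

    stay_lp = math.log(stay)
    switch = math.log((1 - stay) / (len(states) - 1))

    # Dynamic programming over positions, keeping back-pointers
    score = {s: emit(s, log_ratios[0]) for s in states}
    back = []
    for x in log_ratios[1:]:
        new, ptr = {}, {}
        for s in states:
            prev = max(states,
                       key=lambda p: score[p] + (stay_lp if p == s else switch))
            new[s] = score[prev] + (stay_lp if prev == s else switch) + emit(s, x)
            ptr[s] = prev
        score, back = new, back + [ptr]

    # Trace back the best path from the final state
    path = [max(states, key=score.get)]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

The sticky transitions (high `stay`) are what suppress isolated noisy points: a single off-mean ratio is cheaper to explain as emission noise than as two state switches, which is the same smoothing role the DWT step plays upstream in CoNVEX.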
In light of the increasing adoption of targeted resequencing (TR) as a cost-effective strategy to identify disease-causing variants, a robust method for copy number variation (CNV) analysis is needed to maximize the value of this promising technology.
We present a method for CNV detection for TR data, including whole-exome capture data. Our method calls copy number gains and losses for each target region based on normalized depth of coverage. Our key strategies include the use of base-level log-ratios to remove GC-content bias, correction for an imbalanced library size effect on log-ratios, and the estimation of log-ratio variations via binning and interpolation. Our methods are made available via CONTRA (COpy Number Targeted Resequencing Analysis), a software package that takes standard alignment formats (BAM/SAM) and outputs in variant call format (VCF 4.0), for easy integration with other next-generation sequencing analysis packages. We assessed our methods using samples from seven different target enrichment assays, and evaluated our results using simulated data and real germline data with known CNV genotypes.
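The normalization strategy described above can be caricatured in a few lines: scale for library size, take per-region log-ratios, then median-centre within GC bins to strip GC-content bias. The bin width and sample data below are illustrative, not CONTRA's actual scheme:

```python
import math
from statistics import median

def gc_corrected_log_ratios(case, control, gc, bin_width=0.1):
    """Per-region log2 depth ratios, scaled for library size, then
    median-centred within GC-content bins to remove GC bias."""
    scale = sum(control) / sum(case)          # library-size correction
    lr = [math.log2(c * scale / n) for c, n in zip(case, control)]
    bins = {}
    for r, g in zip(lr, gc):
        bins.setdefault(int(g / bin_width), []).append(r)
    med = {b: median(v) for b, v in bins.items()}
    return [r - med[int(g / bin_width)] for r, g in zip(lr, gc)]
```

After centring, regions with no copy change sit near zero regardless of their GC content, so a gain stands out against GC-matched neighbours rather than against a biased global baseline.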
Recent increases in the global demand for IT services have increased the power consumption, total ownership costs and environmental footprint of data centers. Recent efforts to reduce these effects have focused on either their cooling systems or their power systems. In this paper, we have developed an integrated approach to minimize the total power demand of data centers while minimizing their reliance on power imported from the grid. First, the power demand of the data center has been reduced utilizing various air-side economizer-based cooling systems. Since the effectiveness of economizers significantly depends on local weather conditions, 42 stations in major cities across the world have been considered. A more than 80% reduction in total cooling power consumption is achieved by using the most appropriate air-side economizer at each location. Second, the reliance of data centers on power imported from the grid is minimized utilizing on-site hybrid renewable power generation and energy storage. The on-site renewable power generation and capacity factors have been calculated for 1 MW wind and solar renewable power plants to identify the locations with the highest renewable power generation capability. The optimal size of a hybrid renewable power plant, and the associated battery energy storage system, is also determined for each data center using linear programming to minimize total levelized costs. Finally, the optimal location for constructing and operating the most energy efficient, cost-effective and sustainable data center has been identified by calculating its level of independence from the power grid. It is found that the level of grid independence increases as we move away from the equator; for example, more than 50% grid independence can be achieved at the Regina station in Canada.
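The sizing step amounts to choosing wind and solar capacities whose combined output covers the data center's demand at minimum cost. The paper solves this with linear programming over hourly profiles and battery storage; the sketch below substitutes a brute-force grid search over average capacity factors purely for illustration (all costs and capacity factors are made-up numbers):

```python
def size_hybrid_plant(demand_kw, cf_wind, cf_solar,
                      cost_wind, cost_solar, step=100):
    """Cheapest (cost, wind_kw, solar_kw) such that average renewable
    output w*cf_wind + s*cf_solar covers the demand. Illustrative
    grid search standing in for the paper's linear program."""
    best = None
    max_cap = int(demand_kw / min(cf_wind, cf_solar)) + step
    for w in range(0, max_cap + 1, step):
        for s in range(0, max_cap + 1, step):
            if w * cf_wind + s * cf_solar >= demand_kw:
                cost = w * cost_wind + s * cost_solar
                if best is None or cost < best[0]:
                    best = (cost, w, s)
    return best
```

With a 1 MW demand and wind both cheaper per delivered kW and more productive, the search picks an all-wind plant; at locations where solar's capacity factor dominates, the same objective tips the mix the other way, which is exactly the location dependence the study quantifies.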
•Developed an energy model for various data center cooling systems across the world.•Designed an on-site hybrid wind-solar-battery power plant for each data center.•Optimized the size of the hybrid renewable power plant for 42 stations across the world.•Achieved more than 80% reduction in the total cooling power demand at each station.•Achieved more than 50% power grid independence with the proposed method.
► We modeled an absorption chiller including three-dimensional heat and mass diffusion. ► We examined nonuniform concentration and temperature distributions in the solution. ► The simulation results are consistent with the experimental readings. ► The performance was analyzed against the cooling and firing water temperatures.
One of the main drawbacks in modeling absorption chillers is the lack of justified hypotheses of heat and mass diffusion in an annular flow on the outer surface of horizontal tubes. Heat and mass transfer in diffusion is a three-dimensional problem with vector characteristics. This paper introduces the characterization of vapor–absorbent heat and mass transfer phenomena in three-dimensional space to obtain steady state simulation results for single effect LiBr–H2O absorption chillers.
It is thus possible for the first time to ascertain in the simulation that the heat and mass transfer characteristics are not uniform throughout the solution film. The diffusion boundary layer gradually becomes thicker towards the downstream tubes, which has previously been assumed but never confirmed in simulations. The radial component of the concentration field exhibits a potential for determining an optimum film thickness which enhances the system performance. The ability to use the proposed model with various absorbents and different cooling capacities increases its applicability. The model also offers the possibility to analyze different firing techniques at different temperatures. All the simulation results are consistent with the experimental data found in the literature.
Cancer constitutes a major health burden in our society. Critical information on cancer may be hidden in its signaling pathways. However, even though a large amount of money has been spent on cancer research, some critical information on cancer-related signaling pathways still remains elusive. Hence, new work towards a complete understanding of cancer-related signaling pathways will greatly benefit the prevention, diagnosis, and treatment of cancer.
We propose the node-weighted Steiner tree approach to identify important elements of cancer-related signaling pathways at the level of proteins. This new approach has advantages over previous approaches since it is fast in processing large protein-protein interaction networks. We apply this new approach to identify important elements of two well-known cancer-related signaling pathways: PI3K/Akt and MAPK. First, we generate a node-weighted protein-protein interaction network using protein and signaling pathway data. Second, we modify and use two preprocessing techniques and a state-of-the-art Steiner tree algorithm to identify a subnetwork in the generated network. Third, we propose two new metrics to select important elements from this subnetwork. On a commonly used personal computer, this new approach takes less than 2 s to identify the important elements of the PI3K/Akt and MAPK signaling pathways in a large node-weighted protein-protein interaction network with 16,843 vertices and 1,736,922 edges. We further analyze and demonstrate the significance of these identified elements to cancer signal transduction by exploring previously reported experimental evidence.
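One standard way to attack node-weighted Steiner problems, shown here as an illustration rather than the paper's exact algorithm, is to fold node weights into traversal costs and grow a tree by repeatedly attaching the nearest unreached terminal via Dijkstra. The toy graph below is hypothetical; in the paper's setting, vertices would be proteins, terminals the known pathway members, and node weights their (inverted) relevance scores:

```python
import heapq

def steiner_subnetwork(adj, weights, terminals):
    """Greedy shortest-path Steiner heuristic on a node-weighted graph:
    entering a vertex v costs weights[v]; each pass splices in the
    cheapest path from the growing tree to a remaining terminal.
    Assumes all terminals are reachable from each other."""
    tree = {terminals[0]}
    for _ in terminals[1:]:
        remaining = [t for t in terminals if t not in tree]
        if not remaining:
            break
        # Multi-source Dijkstra seeded from every vertex already in the tree
        dist = {v: 0.0 for v in tree}
        prev = {}
        pq = [(0.0, v) for v in tree]
        heapq.heapify(pq)
        seen = set()
        while pq:
            d, u = heapq.heappop(pq)
            if u in seen:
                continue
            seen.add(u)
            for v in adj[u]:
                nd = d + (0.0 if v in tree else weights[v])
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(pq, (nd, v))
        target = min(remaining, key=lambda t: dist.get(t, float("inf")))
        # Splice the cheapest path to that terminal into the tree
        while target not in tree:
            tree.add(target)
            target = prev[target]
    return tree

# Hypothetical 4-protein network: the low-weight route A-C-D beats A-B-D
adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
weights = {"A": 1.0, "B": 5.0, "C": 1.0, "D": 1.0}
```

The intermediate vertices that the heuristic pulls in (C here) play the role of the "important elements" the paper selects: non-terminal proteins that cheaply connect the known pathway members.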
Our node-weighted Steiner tree approach is shown to be both fast and effective at identifying important elements of cancer-related signaling pathways. Furthermore, it may provide new perspectives on the identification of signaling pathways for other human diseases.