This tutorial paper overviews recent developments in optimization-based approaches for resource allocation problems in wireless systems. We begin by overviewing important results in the area of opportunistic (channel-aware) scheduling for cellular (single-hop) networks, where easily implementable myopic policies are shown to optimize system performance. We then describe key lessons learned and the main obstacles in extending this work to general resource allocation problems for multihop wireless networks. Toward this end, we show that a clean-slate optimization-based approach to the multihop resource allocation problem naturally results in a "loosely coupled" cross-layer solution. That is, the algorithms obtained map to different layers of the protocol stack (transport, network, and medium access control/physical (MAC/PHY)), and are coupled through a limited amount of information being passed back and forth. It turns out that the optimal scheduling component at the MAC layer is very complex, and thus requires simpler (potentially imperfect) distributed solutions. We demonstrate how to use imperfect scheduling in the cross-layer framework and describe recently developed distributed algorithms along these lines. We conclude by describing a set of open research problems.
Caching plays an important role in reducing backbone traffic when serving high-volume multimedia content. Recently, a new class of coded caching schemes has received significant interest, because such schemes can exploit coded multicast opportunities to further reduce backbone traffic. Without considering file popularity, prior works have characterized the fundamental performance limits of coded caching through a deterministic worst-case analysis. However, when heterogeneous file popularity is considered, there remain open questions regarding the fundamental limits of coded caching performance. In this paper, for an arbitrary popularity distribution, we first derive a new information-theoretic lower bound on the expected transmission rate of any coded caching scheme. We then show that a simple coded caching scheme attains an expected transmission rate that is at most a constant factor away from the lower bound. Unlike in other existing studies, the constant factor that we derive is independent of the popularity distribution.
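To make the "coded multicast gain" concrete, the following sketch compares the classic worst-case delivery rate of coded caching against uncoded (local-cache-only) delivery, for K users, N files, and a cache of M files per user. This is the standard uniform-popularity expression from the coded caching literature, shown only as background intuition; it is not the popularity-aware bound derived in the paper, and the numbers are illustrative.

```python
# Worst-case server transmission rate (in units of files) for K users,
# N files, cache size M files per user, at an integer corner point
# t = K*M/N of the classic coded caching scheme.

def uncoded_rate(K, N, M):
    # local caching gain only: each user still needs a (1 - M/N) fraction
    return K * (1 - M / N)

def coded_rate(K, N, M):
    t = K * M / N                      # assumed integer for this sketch
    return K * (1 - M / N) / (1 + t)   # extra global (multicast) gain

K, N, M = 10, 10, 5
print(uncoded_rate(K, N, M))  # 5.0 files' worth of traffic
print(coded_rate(K, N, M))    # 5/6 ≈ 0.833: a 6x multicast gain
```

The `1/(1 + t)` factor is the global gain that coded multicasting adds on top of the local caching gain, and it grows with the aggregate cache size across users.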
In this paper, we characterize the performance of an important class of scheduling schemes, called greedy maximal scheduling (GMS), for multihop wireless networks. While a lower bound on the throughput performance of GMS is well known, empirical observations suggest that it is quite loose and that the performance of GMS is often close to optimal. In this paper, we provide a number of new analytic results characterizing the performance limits of GMS. We first provide an equivalent characterization of the efficiency ratio of GMS through a topological property called the local-pooling factor of the network graph. We then develop an iterative procedure to estimate the local-pooling factor under a large class of network topologies and interference models. We use these results to study the worst-case efficiency ratio of GMS on two classes of network topologies. We first apply these results to tree networks, proving that GMS achieves the full capacity region under the K-hop interference model. Then, we show that the worst-case efficiency ratio of GMS in geometric unit-disk graphs is between 1/6 and 1/3.
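The GMS idea itself is simple to state in code: repeatedly schedule the nonempty link with the longest queue that does not conflict with any link already scheduled. The following minimal sketch illustrates this on a hypothetical three-link conflict graph; the topology and queue lengths are invented for illustration.

```python
# Greedy maximal scheduling on a conflict graph: links are picked in
# decreasing queue-length order, skipping any link that interferes with
# an already-scheduled link.

def gms_schedule(queues, conflicts):
    """queues: {link: backlog}; conflicts: {link: set of interfering links}."""
    scheduled = []
    for link in sorted(queues, key=queues.get, reverse=True):
        if queues[link] > 0 and all(link not in conflicts.get(s, set())
                                    for s in scheduled):
            scheduled.append(link)
    return scheduled

# Link A interferes with both B and C, but B and C can transmit together.
queues = {"A": 5, "B": 4, "C": 3}
conflicts = {"A": {"B", "C"}, "B": {"A"}, "C": {"A"}}
print(gms_schedule(queues, conflicts))  # ['A']: A blocks B and C
```

Note how the greedy choice of A (backlog 5) forecloses the schedule {B, C} (total backlog 7); gaps of exactly this kind are what the local-pooling factor quantifies.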
In this paper, we study utility maximization problems for communication networks where each user (or class) can have multiple alternative paths through the network. This type of multi-path utility maximization problem appears naturally in several resource allocation problems in communication networks, such as the multi-path flow control problem, the optimal quality-of-service (QoS) routing problem, and the optimal network pricing problem. We develop a distributed solution to this problem that is amenable to online implementation. We analyze the convergence of our algorithm in both continuous and discrete time, with and without measurement noise. These analyses provide us with guidelines on how to choose the parameters of the algorithm to ensure efficient network control.
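A standard way to obtain such a distributed solution is dual (price-based) decomposition, which the toy sketch below illustrates: one user with two disjoint single-link paths and utility U(x) = log(x). Each link updates a price in proportion to its excess load, and the user sends at the utility-maximizing rate on the cheapest path(s). This is a generic subgradient sketch with invented numbers, not the paper's specific algorithm.

```python
# Dual subgradient for a toy multi-path NUM: two disjoint single-link
# paths of capacity 1, user utility U(x) = log(x), so the rate choice
# satisfies U'(x) = price, i.e. x = 1/price.

cap = [1.0, 1.0]      # link capacity per path
price = [1.0, 1.0]    # dual variables (link prices)
step = 0.05           # subgradient step size
for _ in range(2000):
    p = min(price)
    x_total = 1.0 / max(p, 1e-6)               # utility-maximizing total rate
    cheapest = [i for i in range(2) if price[i] == p]
    load = [x_total / len(cheapest) if i in cheapest else 0.0
            for i in range(2)]
    for i in range(2):                         # price update: excess load
        price[i] = max(1e-6, price[i] + step * (load[i] - cap[i]))

print(round(x_total, 2), [round(p, 2) for p in price])
# → 2.0 [0.5, 0.5]: both links saturated, equal prices at equilibrium
```

The step size plays exactly the role the abstract alludes to: too large and the prices oscillate, too small and convergence is slow, which is why the convergence analyses yield parameter-selection guidelines.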
To realize mobile virtual reality (VR) group gaming services, which are currently hampered by prohibitive bandwidth and stringent delay requirements, we investigate the problem of provisioning such services using emerging mobile edge cloudlet (MEC) networks with a distributed content rendering architecture. The underlying dynamic rendering-module placement problem requires optimizing the service's operational cost and the users' end-to-end performance, involving multiple intertwined, conflicting system objectives that are discrete, nonconvex, higher-degree polynomial functions with coupled decisions and arbitrary user dynamics over time. We solve this online placement problem by leveraging model predictive control (MPC) and overcoming the aforementioned challenges over each prediction window. We explore the connection between the placement problem and the minimal s-t cut problem in graph theory and solve the former via solving a series of instances of the latter. We formally prove the performance guarantee of our approach. We also conduct extensive trace-driven evaluations and demonstrate the superior practical performance of our MPC-based approach compared to de facto practices and state-of-the-art alternatives.
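The minimal s-t cut subproblem that the placement algorithm reduces to is itself computable by max-flow, by max-flow/min-cut duality. The following is a minimal Edmonds-Karp (BFS augmenting path) sketch on a tiny invented graph; it illustrates the graph-theoretic building block only, not the paper's reduction.

```python
from collections import deque

# Max-flow by BFS augmenting paths (Edmonds-Karp); by max-flow/min-cut
# duality the returned value equals the minimal s-t cut capacity.

def max_flow(capacity, s, t):
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:           # BFS for an augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total                       # no augmenting path: done
        bottleneck, v = float("inf"), t        # find path bottleneck
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = t                                  # push flow along the path
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# Tiny example: s = 0, t = 3.
cap = [[0, 2, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # → 4 (min-cut capacity)
```

Solving a series of such instances, one per prediction window, is the computational pattern the abstract describes.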
In order to reduce the energy cost of data centers, recent studies suggest distributing computation workload among multiple geographically dispersed data centers by exploiting electricity price differences. However, the impact of data center load redistribution on the power grid is not yet well understood. This paper takes the first step toward tackling this important issue by studying how the power grid can proactively take advantage of the data centers' load distribution for the purpose of power load balancing. We model the interactions between the power grid and the data centers as a two-stage problem, where the utility company chooses proper pricing mechanisms to balance the electric power load in the first stage, and the data centers seek to minimize their total energy cost by responding to the prices in the second stage. We show that the two-stage problem is a bilevel quadratic program, which is NP-hard and cannot be solved using standard convex optimization techniques. We introduce benchmark problems to derive upper and lower bounds for the solution of the two-stage problem. We further propose a branch-and-bound algorithm to attain the globally optimal solution, and propose a heuristic algorithm with low computational complexity to obtain an alternative close-to-optimal solution. We also study the impact of background load prediction error using the theoretical framework of robust optimization. The simulation results demonstrate that our proposed scheme can not only improve power grid reliability, but also reduce the energy cost of data centers.
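The second-stage best response has a simple structure when energy costs are linear in the posted prices: the data center operator fills the cheapest locations first, up to capacity. The sketch below shows only this inner stage with invented numbers; the paper's full model, with the utility's first-stage pricing on top, is the bilevel quadratic program described above.

```python
# Second-stage decision only: split a fixed workload across data center
# locations to minimize energy cost at given prices, subject to
# per-location capacity. Greedy cheapest-first is optimal for linear costs.

def min_cost_dispatch(total, prices, capacities):
    alloc = [0] * len(prices)
    remaining = total
    for i in sorted(range(len(prices)), key=lambda i: prices[i]):
        alloc[i] = min(capacities[i], remaining)   # fill cheapest first
        remaining -= alloc[i]
    assert remaining <= 1e-9, "infeasible: demand exceeds total capacity"
    return alloc

# 10 units of workload, three locations with prices ($/unit) and capacities.
print(min_cost_dispatch(10, [0.12, 0.08, 0.10], [6, 5, 6]))
# → [0, 5, 5]: the most expensive location receives nothing
```

The first-stage pricing problem is hard precisely because the utility must anticipate this threshold-like response when choosing prices, which is what makes the joint problem bilevel rather than a single convex program.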
In this paper, we study how to utilize forecasts to design online electric vehicle (EV) charging algorithms that can attain strong performance guarantees. We consider the scenario of an aggregator serving a large number of EVs together with its background load, using both its own renewable energy (for free) and energy procured from the external grid. The goal of the aggregator is to minimize its peak procurement from the grid, subject to the constraint that each EV must be fully charged before its deadline. Further, the aggregator can predict the future demand and the renewable energy supply with some level of uncertainty. We show that such prediction can be very effective in reducing the competitive ratios of online control algorithms, and can even allow online algorithms to achieve close-to-offline-optimal peak. Specifically, we first propose a 2-level increasing precision model (2-IPM) to model forecasts with different levels of accuracy. We then develop a powerful computational approach that can compute the optimal competitive ratio under 2-IPM over any online algorithm, as well as online algorithms that achieve the optimal competitive ratio. Simulation results show that, even with up to 20% day-ahead prediction errors, our online algorithms still achieve competitive ratios fairly close to 1, which is much better than the classic results in the literature with a competitive ratio of e. The second contribution of this paper is that we resolve a dilemma in online algorithm design, namely that an online algorithm with a good competitive ratio may exhibit poor average-case performance. We propose a new Algorithm-Robustification procedure that can convert an online algorithm with good average-case performance into one with both the optimal competitive ratio and good average-case performance. We demonstrate via trace-based simulations the superior performance of the robustified version of a well-known heuristic algorithm based on model predictive control.
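Why peak-minimizing EV charging benefits from look-ahead can be seen in a toy comparison: charging each EV as fast as possible on arrival piles demand up early, while spreading each EV's energy over its availability window flattens the aggregate profile. Both policies below are generic illustrations with invented jobs, not the paper's algorithms, which additionally exploit forecasts and competitive analysis.

```python
# Two charging policies on jobs (arrival slot, deadline slot, energy),
# compared by the peak of the aggregate load profile.

def peak(profile):
    return max(profile)

def greedy_asap(jobs, horizon, max_rate):
    """Charge each EV at the maximum rate starting at arrival."""
    load = [0.0] * horizon
    for arrival, deadline, energy in jobs:
        t = arrival
        while energy > 1e-9 and t < deadline:
            e = min(max_rate, energy)
            load[t] += e
            energy -= e
            t += 1
    return load

def spread_evenly(jobs, horizon):
    """Charge each EV at a constant rate over its whole window."""
    load = [0.0] * horizon
    for arrival, deadline, energy in jobs:
        rate = energy / (deadline - arrival)
        for t in range(arrival, deadline):
            load[t] += rate
    return load

jobs = [(0, 4, 4.0), (0, 4, 4.0)]            # two EVs, 4 kWh each
print(peak(greedy_asap(jobs, 4, 2.0)))       # → 4.0 (demand piled up early)
print(peak(spread_evenly(jobs, 4)))          # → 2.0 (flattened profile)
```

With uncertain future arrivals, spreading requires knowing the windows in advance, which is exactly where the forecast models (2-IPM) enter.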
In this paper, we are interested in minimizing the delay and maximizing the lifetime of event-driven wireless sensor networks in which events occur infrequently. In such systems, most of the energy is consumed when the radios are on, waiting for a packet to arrive. Sleep-wake scheduling is an effective mechanism to prolong the lifetime of these energy-constrained wireless sensor networks. However, sleep-wake scheduling can result in substantial delays because a transmitting node needs to wait for its next-hop relay node to wake up. An interesting line of work attempts to reduce these delays by developing "anycast"-based packet forwarding schemes, where each node opportunistically forwards a packet to the first neighboring node that wakes up among multiple candidate nodes. In this paper, we first study how to optimize the anycast forwarding schemes for minimizing the expected packet-delivery delays from the sensor nodes to the sink. Based on this result, we then provide a solution to the joint control problem of how to optimally control the system parameters of the sleep-wake scheduling protocol and the anycast packet-forwarding protocol to maximize the network lifetime, subject to a constraint on the expected end-to-end packet-delivery delay. Our numerical results indicate that the proposed solution can outperform prior heuristic solutions in the literature, especially under practical scenarios where there are obstructions, e.g., a lake or a mountain, in the coverage area of the wireless sensor network.
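The core benefit of anycast forwarding can be quantified in a one-line probabilistic fact: if the n candidate relays wake up according to independent Poisson processes of rate lam, the sender waits only for the earliest wake-up, whose mean is 1/(n*lam) rather than 1/lam. The simulation below checks this; the wake-up model and parameters are a simplifying assumption for illustration, not the paper's exact system model.

```python
import random

# Mean one-hop anycast delay: the wait for the first of n independent
# exponential(lam) wake-up times, which is exponential with rate n*lam.

def mean_anycast_delay(n, lam, trials=200_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += min(rng.expovariate(lam) for _ in range(n))
    return total / trials

print(round(mean_anycast_delay(3, 1.0), 2))  # close to 1/3, vs 1.0 unicast
```

This n-fold delay reduction per hop is what the optimized forwarding schemes trade off against energy: more candidate relays means shorter waits but more nodes spending energy listening.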
Intrahepatic cholangiocarcinoma (ICC) has a poor prognosis, and 40%-60% of patients present with advanced disease at the time of diagnosis. Transarterial chemoembolization (TACE) and hepatic arterial infusion chemotherapy (HAIC) have recently been used in unresectable ICC. The aim of this study was to compare the survival differences of unresectable ICC patients after TACE and HAIC treatment.
Between March 2011 and October 2019, a total of 126 patients with unresectable ICC, as evident from biopsies and imaging, and who had received TACE or HAIC were enrolled in this study. Baseline characteristics and survival differences were compared between the TACE and HAIC treatment groups.
ICC patients had significantly higher survival rates after HAIC treatment than after TACE treatment (1-year overall survival (OS) rates: 60.2% vs. 42.9%; 2-year OS rates: 38.7% vs. 29.4%; P=0.028. 1-year progression-free survival (PFS) rates: 15.0% vs. 20.0%; 2-year PFS rates: 0% vs. 0%; P=0.641. 1-year only-intrahepatic PFS (OIPFS) rates: 35.0% vs. 24.4%; 2-year OIPFS rates: 13.1% vs. 14.6%; P=0.026). Multivariate Cox regression analysis showed that HAIC was a significant and independent factor for OS and OIPFS in the study cohort.
HAIC is superior to TACE for treatment of unresectable ICC. A new tumor response evaluation procedure for HAIC treatment in unresectable ICC patients is needed to provide better therapeutic strategies. A randomized clinical trial comparing the survival benefits of HAIC and TACE is therefore being considered.
With the increase in cancer survivors, more pancreatic ductal adenocarcinomas (PDACs) are developing as second primary cancers. Whether a prior cancer has an adverse impact on survival outcomes in patients with PDAC remains unknown, and the validity of criteria used to exclude patients with prior cancers from clinical trials needs to be determined. The aim of this study was to evaluate the prognostic factors and assess the survival impact of a prior cancer in patients with second primary PDAC.
Patients with PDAC were retrospectively selected from the Surveillance, Epidemiology, and End Results (SEER) database. Overall survival (OS) and cancer-specific mortality rates were compared between patients with and those without prior cancer.
The data of 9235 patients with PDAC from 2004 to 2015 were retrieved from the SEER database, consisting of 438 (4.74%) patients with a prior cancer and 8797 (95.26%) patients without a prior cancer. The patients were then pair-matched using propensity score matching (PSM) analysis. The median OS was 7 months for both groups of patients with PDAC, with and without a prior cancer. These two groups of patients had similar survival rates and cancer-specific mortalities before and after the PSM analysis. In the multivariate analysis, a history of prior cancer was not a significant prognostic factor of OS in patients with PDAC.
Patients with PDAC who had a prior cancer had similar OS and cancer-specific mortality rates as those of patients without a prior cancer. The inclusion of patients with a prior cancer in the clinical trials of PDAC should be considered.