As Internet traffic grows exponentially due to pervasive Internet access via mobile devices and the increasing adoption of cloud-based applications, broadband providers have started to shift from flat-rate to usage-based pricing, which has gained support from regulators such as the FCC. We consider generic congestion-prone network services and study the usage-based pricing of service providers under market competition. Based on a novel model that captures users' preferences over price and congestion alternatives, we derive the induced congestion and market shares of the service providers under a market equilibrium and design algorithms to calculate them. By analyzing different market structures, we reveal how users' valuation of usage and sensitivity to congestion influence the optimal price, revenue, and competition of service providers, as well as the social welfare. We also obtain the conditions under which monopolistic providers have strong incentives to implement service differentiation via Paris Metro Pricing, and discuss whether regulators should encourage such practices.
The Bloom filter (BF) has been widely used to support membership queries, i.e., to judge whether a given element x is a member of a given set S. Recent years have seen an explosion of BF designs, owing to the structure's space efficiency and constant-time membership queries. Existing reviews and surveys mainly focus on the applications of BF but fall short of covering current trends, and thus lack an intrinsic understanding of the variants' design philosophy. To this end, this survey provides an overview of BF and its variants, with an emphasis on optimization techniques. We survey the existing variants along two dimensions, i.e., performance and generalization. To improve performance, dozens of variants devote themselves to reducing false positives and implementation costs. In addition, many variants generalize the BF framework to more scenarios by diversifying the input sets and enriching the output functionalities. To summarize the existing efforts, we conduct an in-depth study of the literature on BF optimization, covering more than 60 variants. We unearth the design philosophy of these variants and elaborate on how the employed optimization techniques improve BF. Furthermore, comprehensive analysis and qualitative comparison are conducted from the perspective of BF components. Lastly, we highlight future trends in designing BFs. To the best of our knowledge, this is the first survey that accomplishes these goals.
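The membership query the abstract describes can be illustrated with a minimal sketch of a standard Bloom filter; the class name, bit-array size, and hash construction below are illustrative choices, not taken from any surveyed variant.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: an m-bit array with k hash functions.

    Queries have no false negatives; false positives occur with a
    probability that shrinks as m grows relative to the set size.
    """
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)  # one byte per bit, for simplicity

    def _hashes(self, item):
        # Derive k indices by salting one cryptographic hash.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for idx in self._hashes(item):
            self.bits[idx] = 1

    def might_contain(self, item):
        # False => definitely not in the set; True => possibly in the set.
        return all(self.bits[idx] for idx in self._hashes(item))
```

The two optimization dimensions the survey identifies map directly onto this sketch: performance-oriented variants tune m, k, and the bit layout to cut false positives and cost, while generalizing variants replace the bit array or the boolean answer with richer structures.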
Within the current Internet, autonomous ISPs implement bilateral agreements, with each ISP establishing agreements that suit its own local objective of maximizing its profit. Peering agreements based on local views and bilateral settlements, while expedient, encourage selfish routing strategies and discriminatory interconnections. From a more global perspective, such settlements reduce aggregate profits, limit the stability of routes, and discourage potentially useful peering/connectivity arrangements, thereby unnecessarily balkanizing the Internet. We show that if the distribution of profits is enforced at a global level, then there exist profit-sharing mechanisms, derived from the coalitional-game concept of the Shapley value and its extensions, that will encourage these selfish ISPs, each seeking to maximize its own profit, to converge to a Nash equilibrium. We show that these profit-sharing schemes exhibit several fairness properties that support the argument that this distribution of profits is desirable. In addition, at the Nash equilibrium point, the routing and connecting/peering strategies maximize aggregate network profits and encourage ISP connectivity so as to limit balkanization.
Unlike telephone operators, which pay termination fees to reach the users of another network, Internet content providers (CPs) do not pay the Internet service providers (ISPs) of the users they reach. While the consequent cross-subsidization to CPs has nurtured content innovations at the edge of the Internet, it reduces the incentives for access ISPs to invest in capacity expansion. As potential charges for terminating CPs' traffic are criticized under the net neutrality debate, we propose to allow CPs to voluntarily subsidize the usage-based fees that their content traffic induces for end-users. We model the regulated subsidization competition among CPs under a neutral network and show how deregulating subsidization could increase an access ISP's utilization and revenue, strengthening its investment incentives. Our results suggest that subsidization competition will increase the competitiveness and welfare of the Internet content market. However, regulators might need to: 1) regulate access prices if the access ISP market is not competitive enough; and 2) regulate subsidies if the network is highly congested. We envision that subsidization competition could become a viable net-neutral model for the future Internet.
Internet service providers (ISPs) depend on one another to provide global network services. However, the profit-seeking nature of the ISPs leads to selfish behaviors that result in inefficiencies and disputes in the network. This concern is at the heart of the "network neutrality" debate, which also asks for an appropriate compensation structure that satisfies all types of ISPs. Our previous work showed, in a general network model, that the Shapley value has several desirable properties, and that if applied as the profit model, selfish ISPs would yield globally optimal routing and interconnecting decisions. In this paper, we use a more detailed and realistic network model with three classes of ISPs: content, transit, and eyeball. This additional detail enables us to delve much deeper into the implications of a Shapley settlement mechanism. We derive closed-form Shapley values for more structured ISP topologies and develop a dynamic programming procedure to compute the Shapley values under more diverse Internet topologies. We also identify the implications for bilateral compensation between ISPs and for the pricing structures of differentiated services. In practice, these results provide guidelines for resolving disputes between ISPs and for establishing regulatory protocols for differentiated services in the industry.
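The Shapley settlement idea can be made concrete with a brute-force computation on a toy coalitional game; the three-ISP worth function below is a hypothetical illustration (revenue only when a full content–transit–eyeball path exists), not a game derived in the paper, and the paper's dynamic program is what scales beyond such tiny instances.

```python
import math
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley value: average each player's marginal contribution
    over all join orders. Exponential-time, so only for tiny games."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition |= {p}
    n_fact = math.factorial(len(players))
    return {p: total / n_fact for p, total in phi.items()}

# Hypothetical 3-ISP game: revenue 1 is generated only by an end-to-end
# path, i.e., when content, transit, and eyeball ISPs all cooperate.
v = lambda S: 1.0 if len(S) == 3 else 0.0
phi = shapley_values(["content", "transit", "eyeball"], v)
```

By symmetry, each ISP here receives one third of the revenue; the fairness properties the paper invokes (efficiency, symmetry) are visible directly in this split.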
Net neutrality has recently been heavily debated as a potential regulation of the Internet. The debate centers on whether Internet service providers (ISPs) should be allowed to provide differentiated services over the Internet. Advocates of net neutrality have expressed concerns about the ISPs' pricing power, which might be used to discriminate against content providers (CPs) and consequently destroy innovation at the edge of the Internet and hurt users' utility. However, without service differentiation, ISPs have no incentive to expand infrastructure capacity and provide quality of service, which will eventually impair the development of the future Internet. Although market competition among ISPs would alleviate the problem and reduce the need for net neutrality regulations, the problem is more severe in monopolistic markets, e.g., rural access markets, where natural monopolies exist due to high deployment costs and where appropriate regulations are most needed. We study the service differentiation offered by a monopolistic ISP and find that the ISP's profit-optimal strategy turns the free ordinary service into a damaged good, which hurts the welfare of CPs and their users. Instead of imposing net neutrality regulations, we propose a more flexible and lenient policy framework that generalizes them. We believe that by allowing ISPs to differentiate services under a well-designed policy constraint, the utility of the entire Internet ecosystem could be greatly improved.
The net neutrality debate has been centered on the question: should Internet service providers (ISPs) be allowed to differentiate services for Internet content traffic? The concern is that the differentiation imposed by selfish ISPs might discriminate against content providers (CPs) and harm social welfare. Although market competition among ISPs would alleviate the problem and reduce the need for net neutrality regulations, the problem remains in monopolistic access markets. We focus on such a market and study paid prioritization, where CPs voluntarily pay to prioritize their traffic under shared capacity. We study an ISP's pricing strategy, the CPs' choices of priority, and the resulting system equilibrium, based on which we derive the utility of the ISP and CPs as well as the social welfare. This paper shows that: 1) an ISP's optimal pricing leads to an efficient differentiation among CPs, such that social welfare is close to its maximum; 2) although ISPs might inhibit capacity deployment in the short run, price regulation could solve this issue; and 3) under medium system scale and capacity cost, ISPs would have strong incentives to expand capacity under paid prioritization. From a welfare perspective, our results suggest that paid prioritization could be superior to the imposition of net neutrality regulations.
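The effect of prioritization under shared capacity can be sketched with textbook queueing theory: a non-preemptive strict-priority M/M/1 queue with Cobham's waiting-time formula. This is an illustrative model chosen here, not the paper's equilibrium model; all parameter values are hypothetical.

```python
def priority_wait_times(lam, mu):
    """Mean queueing delay per class (highest priority first) in a
    non-preemptive strict-priority M/M/1 queue, all classes sharing
    one server of rate mu (Cobham's formula)."""
    rho = [l / mu for l in lam]           # per-class utilization
    w0 = sum(rho) / mu                    # mean residual service time
    waits, sigma = [], 0.0
    for r in rho:
        prev = sigma                      # load of strictly higher classes
        sigma += r                        # load of this class and above
        waits.append(w0 / ((1 - prev) * (1 - sigma)))
    return waits

# Two CP classes sharing capacity mu=10, each with arrival rate 4:
w = priority_wait_times([4.0, 4.0], 10.0)
```

Here the prioritized class waits 0.08/0.6 ≈ 0.133 while the best-effort class waits 0.08/0.12 ≈ 0.667; the load-weighted average equals the FIFO delay (a conservation law), which is why prioritization redistributes rather than creates delay, and why CPs' willingness to pay for it can differentiate them efficiently.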
As the Internet continues to evolve, traditional peering agreements cannot accommodate the changing market conditions. Premium peering has emerged, where access providers (APs) charge content providers (CPs) for premium services beyond best-effort connectivity. Although prioritized peering raises concerns about net neutrality, the U.S. FCC exempted peering agreements from its recent ruling, citing its limited background in the Internet peering context. In this paper, we consider the premium peering options provided by APs and study whether CPs will choose to peer. Based on a novel choice model of complementary services, we characterize the market shares and utilities of the providers under various peering decisions and identify the value of premium peering for the CPs, which fundamentally determines the CPs' peering decisions. We find that high-value CPs face pressure to peer when low-value CPs peer; however, low-value CPs behave in the opposite manner. The peering decisions of the high-value and low-value CPs are substantially influenced by their baseline market shares and user stickiness, respectively, but not vice versa.
DRS: Auto-Scaling for Real-Time Stream Analytics
Fu, Tom Z. J.; Ding, Jianbing; Ma, Richard T. B.
IEEE/ACM Transactions on Networking, Vol. 25, No. 6, December 2017.
Journal Article, Peer-reviewed.
In a stream data analytics system, input data arrive continuously and trigger the processing and updating of analytics results. We focus on applications with real-time constraints, in which any data unit must be completely processed within a given time duration. To handle fast data, it is common to place the stream data analytics system on top of a cloud infrastructure. Because stream properties, such as arrival rates, can fluctuate unpredictably, cloud resources must be dynamically provisioned and scheduled accordingly to ensure real-time responses. It is essential, for existing systems and future developments alike, to be able to scale resources dynamically according to the instantaneous workload, in order to avoid wasting resources or failing to deliver the correct analytics results on time. Motivated by this, we propose DRS, a dynamic resource scaling framework for cloud-based stream data analytics systems. DRS overcomes three fundamental challenges: 1) how to model the relationship between the provisioned resources and the application performance; 2) where to best place resources; and 3) how to measure the system load with minimal overhead. In particular, DRS includes an accurate performance model based on the theory of Jackson open queueing networks and is capable of handling arbitrary operator topologies, possibly with loops, splits, and joins. Extensive experiments with real data show that DRS is capable of detecting sub-optimal resource allocation and making quick and effective resource adjustments.
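The Jackson open-network theory that DRS builds on can be sketched as follows: solve the traffic equations for each operator's effective arrival rate, treat each operator as an independent M/M/1 queue, and apply Little's law for the end-to-end mean response time. This is a generic queueing sketch with hypothetical parameters, not DRS's own performance model.

```python
def jackson_mean_response(lam_ext, P, mu, iters=500):
    """Open Jackson network: external arrival rates lam_ext, routing
    matrix P (P[j][i] = fraction of node j's output sent to node i),
    service rates mu. Returns the network-wide mean response time."""
    n = len(lam_ext)
    lam = list(lam_ext)
    for _ in range(iters):  # fixed point of lam = lam_ext + lam @ P
        lam = [lam_ext[i] + sum(lam[j] * P[j][i] for j in range(n))
               for i in range(n)]
    assert all(l < m for l, m in zip(lam, mu)), "unstable network"
    # Per-node M/M/1 mean jobs in system, summed over nodes:
    jobs = sum((l / m) / (1 - l / m) for l, m in zip(lam, mu))
    return jobs / sum(lam_ext)  # Little's law: T = L / total throughput

# Hypothetical two-operator pipeline: source -> op0 -> op1 -> sink,
# 2 tuples/s external input, service rates 5/s and 4/s.
t = jackson_mean_response([2.0, 0.0], [[0, 1], [0, 0]], [5.0, 4.0])
```

Raising either operator's service rate (i.e., giving it more provisioned resources) lowers the returned response time, which is exactly the resource-to-performance relationship an auto-scaler must model.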
Frequency control is essential to maintain the stability and reliability of power grids. For decades, generation-side controllers, e.g., governors and automatic generation controllers, have been used to stabilize the frequency of power systems, which incurs high operational costs. In smart grids, utilizing demand response is an appealing alternative for controlling the system frequency at the demand side, which can reduce the grid's dependency on expensive generation-side controllers. Despite its economic advantages, the frequency oscillation problem, which occurs when smart appliances simultaneously respond to the system frequency by varying their power consumption, is the main barrier to realizing demand-response-enabled frequency control in practice. In this paper, we investigate a new distributed control algorithm that randomizes smart appliances' responses to solve this problem. We provide a comprehensive analysis to characterize the various impacts of the randomized demand response on the system frequency in terms of its mean and variance over time. Furthermore, based on the frequency dynamics analysis, we determine the average frequency recovery time, the average number of responding smart appliances, and the probability of frequency overshoot, which provide important guidelines for designing our control algorithm. Finally, we validate our analysis via simulations under practical setups.
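The core randomization idea can be illustrated with a Monte Carlo sketch: if each appliance independently responds with probability p rather than all at once, the number of responding appliances is Binomial(n, p), so its mean and variance are predictable and the all-or-nothing oscillation is avoided. The function and parameters below are illustrative assumptions, not the paper's control algorithm or analysis.

```python
import random

def simulate_responses(n_appliances, p, trials=10000, seed=0):
    """Monte Carlo of randomized demand response: each appliance
    independently responds to a frequency deviation with probability p.
    The responding count is Binomial(n, p): mean n*p, variance n*p*(1-p).
    Returns the empirical mean and variance of the count."""
    rng = random.Random(seed)
    counts = [sum(rng.random() < p for _ in range(n_appliances))
              for _ in range(trials)]
    mean = sum(counts) / trials
    var = sum((c - mean) ** 2 for c in counts) / trials
    return mean, var
```

Tuning p trades off recovery speed (expected responding load n*p) against variability (n*p*(1-p)), mirroring the mean-and-variance characterization the paper derives for the system frequency.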