Pool-Point Distribution of Zero-Inventory Products
Geismar, H. Neil; Dawande, Milind; Sriskandarajah, Chelliah
Production and Operations Management, September/October 2011, Volume 20, Issue 5
Journal Article
Peer-reviewed
We study zero‐inventory production‐distribution systems under pool‐point delivery. The zero‐inventory production and distribution paradigm is supported in a variety of industries in which a product cannot be inventoried because of its short shelf life. The advantages of pool‐point (or hub‐and‐spoke) distribution, explored extensively in the literature, include the efficient use of transportation resources and effective day‐to‐day management of operations.
The setting of our analysis is as follows: a production facility (plant) with a finite production rate distributes its single product, which cannot be inventoried, to several pool points. Each pool point may require multiple truckloads to satisfy its customers' demand. A third‐party logistics provider then transports the product to individual customers surrounding each pool point. The production rate can be increased up to a certain limit by incurring additional cost. The product is delivered by identical trucks, each having limited capacity and non‐negligible travel time between the plant and the pool points. Our objective is to coordinate the production and transportation operations so that the total cost of production and distribution is minimized, while respecting the product-lifetime and delivery-capacity constraints.
This study develops intuition about zero‐inventory production‐distribution systems under pool‐point delivery by considering several variants of the above setting, including multiple trucks, a modifiable production rate, and alternative objectives. Using a combination of theoretical analysis and computational experiments, we gain insights into optimizing the total cost of a production‐delivery plan by understanding the trade‐off between production and transportation.
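To make the product-lifetime constraint concrete, here is a minimal sketch in Python; all parameter values are hypothetical, not from the paper. It checks whether the oldest unit in a truckload is still usable when the truck reaches a pool point.

```python
# Hypothetical illustration of the product-lifetime constraint in a
# zero-inventory system; none of these parameter values come from the paper.
production_rate = 40.0   # units produced per hour
truck_capacity = 100.0   # units per truck
travel_time = 1.5        # hours from the plant to a pool point
lifetime = 5.0           # hours before the product becomes unusable

# The first unit produced for a truckload waits while the rest of the load
# is produced, then rides along to the pool point.
fill_time = truck_capacity / production_rate
oldest_age_on_arrival = fill_time + travel_time

print(f"Truck fill time: {fill_time:.2f} h")
print(f"Oldest unit's age on arrival: {oldest_age_on_arrival:.2f} h")
if oldest_age_on_arrival <= lifetime:
    print("Deliverable within the product's lifetime")
else:
    print("Infeasible: raise the production rate (at extra cost) "
          "or ship smaller loads")
```

When the check fails, the levers the paper studies come into play: paying to raise the production rate, or altering the delivery plan.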
The effective utilization of capacity is an important operational goal that managers strive to achieve. Most textbooks use the following simple “bottleneck formula” to illustrate the calculation of process capacity: the capacity of each resource is first calculated by examining that resource in isolation; process capacity is then taken as the smallest (bottleneck) among the resource capacities. The bottleneck formula is, in fact, an approximation of the true process capacity and correctly calculates capacity only in some straightforward settings, for example, in processes where each activity requires only one resource and in processes where each resource is dedicated to only one activity. However, when activities require multiple resources simultaneously (collaboration) and when resources are capable of performing multiple activities (multitasking), the simple formula can be significantly inaccurate. Further, several commonly held managerial insights related to process capacity and least-capacity resources that emerge from the formula can be misleading. The main goal of this case is to alert students that, for processes with collaboration and multitasking, the bottleneck formula carries the danger of incorrect conclusions about capacity and about what constitutes a bottleneck of a process, and may eventually lead to erroneous decisions with significant financial impact, for example, investing in an expensive resource without being able to realize the presumed increase in capacity. More generally, the case illustrates the principles of process capacity and bottleneck structures and clarifies some often-repeated misunderstandings about the relationship between process capacity and least-capacity resources. The case also illustrates the value of Gantt charts for conveniently displaying schedules of activities.
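A hedged illustration of the formula on a hypothetical three-resource process (the numbers are invented for this sketch):

```python
# The textbook "bottleneck formula": compute each resource's capacity in
# isolation, then take the minimum. All numbers here are invented.
minutes_per_unit = {"machine": 4.0, "operator": 6.0, "inspector": 3.0}
available_minutes = 480.0  # one 8-hour shift

capacities = {r: available_minutes / m for r, m in minutes_per_unit.items()}
bottleneck = min(capacities, key=capacities.get)

print(capacities)  # units per shift for each resource, viewed in isolation
print(f"Formula capacity: {capacities[bottleneck]:.0f} units/shift "
      f"(bottleneck: {bottleneck})")

# The case's caveat: if the operator must work jointly with the machine on
# some activities (collaboration) while also handling other activities
# (multitasking), true capacity can fall strictly below this figure;
# only a schedule (e.g., a Gantt chart) reveals the real constraint.
```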
Based on our work with ConAgra Foods (http://www.conagrafoods.com), a leading U.S. food manufacturer, we study a large-scale production-planning problem. The problem incorporates several distinguishing characteristics of production in the processed-food industry, including (i) production patterns that define specific combinations of weeks in which products can be produced, (ii) food groups that classify products based on the allergens they contain, (iii) sequence-dependent setup times, and (iv) manufacture of a large number of products (typically around 200-250) on multiple production lines (typically around 15-20) in the presence of significant inventory holding costs and production setup costs. The objective is to obtain a minimum-cost four-week cyclic schedule that resolves three basic decisions: (a) the assignment of products to each line, (b) the partitioning of the demand of each product over the lines to which it is assigned, and (c) the sequence of production on each line. We show that the general problem is strongly NP-hard. To develop intuition via theoretical analysis, we first obtain a polynomially solvable special case by sacrificing as little of the problem's structure as possible and then analyze the impact of imposing production patterns. A mixed-integer programming model of the general problem allows us to assess the average impact of production patterns and production capacities on the cost of an optimal schedule. Next, to solve practical instances of the problem, we develop an easy-to-implement heuristic. We first demonstrate the effectiveness of the heuristic on a comprehensive test bed of instances; the average percentage gap of the heuristic solution from the optimum is about 3%. Then, we show savings of about 28% on a real-world instance (283 products, 17 production lines) by comparing the schedule obtained from the heuristic to one that was in use at ConAgra, based on an earlier consultant's work. Finally, we discuss the IT infrastructure implemented to enable the incorporation of optimized (or near-optimized) solutions for ongoing use.
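The paper's full MIP is not reproduced in the abstract; as a hedged illustration of decision (a) alone, the toy model below assigns products to lines at minimum cost using the open-source PuLP library. All products, lines, costs, and the capacity limit are hypothetical, and the real model also partitions demand, sequences production, and handles patterns and setups.

```python
# Toy sketch of product-to-line assignment (decision (a) only), not the
# paper's model. Requires: pip install pulp (uses the bundled CBC solver).
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, value

products = ["P1", "P2", "P3"]
lines = ["L1", "L2"]
cost = {("P1", "L1"): 4, ("P1", "L2"): 6,   # hypothetical cost of producing
        ("P2", "L1"): 7, ("P2", "L2"): 3,   # product p on line l over the
        ("P3", "L1"): 5, ("P3", "L2"): 5}   # four-week cycle
line_capacity = 2  # at most two products per line in this toy instance

prob = LpProblem("toy_assignment", LpMinimize)
x = LpVariable.dicts("x", (products, lines), cat=LpBinary)

prob += lpSum(cost[p, l] * x[p][l] for p in products for l in lines)
for p in products:                     # each product goes on exactly one line
    prob += lpSum(x[p][l] for l in lines) == 1
for l in lines:                        # respect the per-line limit
    prob += lpSum(x[p][l] for p in products) <= line_capacity

prob.solve()
for p in products:
    for l in lines:
        if value(x[p][l]) == 1:
            print(f"{p} -> {l}")
```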
Access control mechanisms in software systems administer user privileges by granting users permission to perform certain operations while denying unauthorized access to others. Such mechanisms are essential to ensure that important business functions in an organization are conducted securely and smoothly. Currently, the dominant access control approach in most major software systems is role-based access control. In this approach, permissions are first assigned to roles, and users acquire permissions by becoming members of certain roles. However, given the dynamic nature of organizations, a fixed set of roles usually cannot meet the demands that users (existing or new) have to conduct business.
The typical response to this problem is to myopically create new roles to meet immediate demand that cannot be satisfied by the existing set of roles. This ad hoc creation of roles invariably leads to a proliferation in the number of roles, with accompanying administrative overhead. Based on discussions with practitioners, we propose a role-refinement scheme that reconstructs a system of roles to reduce the cost of role management. We first show that the role-refinement problem is strongly NP-hard, then provide two polynomial-time approximation algorithms (a greedy algorithm and a randomized rounding algorithm) and establish their performance guarantees. Finally, numerical experiments, based on a real data set from a firm's enterprise resource planning system, are conducted to demonstrate the applicability and performance of our refinement scheme.
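As a hedged sketch in the spirit of the paper's greedy algorithm (the paper's exact objective, candidate generation, and guarantees are not reproduced here), the code below greedily selects roles that cover the most still-unmet user-permission assignments without over-granting; all users, roles, and permissions are hypothetical.

```python
# Greedy role selection sketch: cover every (user, permission) requirement
# using as few candidate roles as possible, never granting a user a
# permission outside their entitlement. All data below are invented.
user_perms = {                       # permissions each user must end up with
    "alice": {"read", "write"},
    "bob":   {"read"},
    "carol": {"read", "write", "approve"},
}
candidate_roles = {                  # candidate roles to choose from
    "viewer":   {"read"},
    "editor":   {"read", "write"},
    "approver": {"approve"},
}

uncovered = {(u, p) for u, perms in user_perms.items() for p in perms}
chosen = []
while uncovered:
    # a role covers (u, p) if it contains p and its whole permission set
    # fits within what u is entitled to (no over-granting)
    def gain(role_perms):
        return sum(1 for (u, p) in uncovered
                   if p in role_perms and role_perms <= user_perms[u])
    best = max(candidate_roles, key=lambda r: gain(candidate_roles[r]))
    if gain(candidate_roles[best]) == 0:
        break  # remaining requirements need roles outside the candidate set
    chosen.append(best)
    uncovered -= {(u, p) for (u, p) in uncovered
                  if p in candidate_roles[best]
                  and candidate_roles[best] <= user_perms[u]}

print("Roles selected:", chosen)
```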
A dedicated subnetwork (DSN) refers to a subset of lanes, with associated loads, in a shipper's transportation network, for which resources (trucks, drivers, and other equipment) are exclusively assigned to accomplish shipping requirements. The resources assigned to a DSN are not shared with the rest of the shipper's network. Thus, a DSN is an autonomously operated subnetwork and, hence, can be subcontracted. We address a novel problem of extracting a DSN for outsourcing to one or more subcontractors, with the objective of maximizing the shipper's savings. In their pure form, the defining conditions of a DSN are often too restrictive to enable the extraction of a sizable subnetwork. We consider two notions, deadheading and lane‐sharing, that help increase the size of the DSN. We show that all the optimization problems involved are both strongly NP‐hard and APX‐hard, and demonstrate several polynomially solvable special cases arising from topological properties of the network and parametric relationships. Next, we develop a network‐flow‐based heuristic that provides near‐optimal solutions to practical instances in reasonable time. Finally, using a test bed based on data obtained from a national 3PL company, we demonstrate the substantial monetary impact of subcontracting a DSN and offer useful managerial insights.
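As a hedged illustration of why deadheading matters (a simplification, not the paper's heuristic): dedicated trucks can serve a set of lanes repeatedly without empty repositioning moves only if loaded arrivals balance loaded departures at every terminal. The lanes below are hypothetical.

```python
# Check whether a hypothetical set of lanes (origin, destination) is
# balanced at every terminal, i.e., servable by dedicated trucks that
# cycle through the lanes without deadheading.
from collections import Counter

def balanced(lanes):
    out_deg, in_deg = Counter(), Counter()
    for origin, dest in lanes:
        out_deg[origin] += 1
        in_deg[dest] += 1
    terminals = set(out_deg) | set(in_deg)
    return all(out_deg[t] == in_deg[t] for t in terminals)

print(balanced([("Dallas", "Memphis"), ("Memphis", "Dallas")]))  # True
print(balanced([("Dallas", "Memphis"), ("Memphis", "Tulsa")]))   # False:
# a truck ends in Tulsa and needs an empty (deadhead) move to restart.
```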
The overuse of its currency processing operations by depository institutions (DIs) has motivated the Federal Reserve (Fed) to propose new currency recirculation guidelines. The Fed believes that DIs should play a more active role in recirculating fit (i.e., usable) currency so that the societal cost of providing currency to the public is minimized. The Fed characterizes the overuse by the extent of cross shipping, a practice in which the same DI deposits and withdraws currency of the same denomination within five business days in the same geographic region. The Fed's proposal encourages DIs to fit-sort and reuse deposited currency through two components: a custodial inventory program and a recirculation fee that would be charged on withdrawals of cross‐shipped currency. Given the geographical network of the various branches of a DI, the extent of its participation in the proposed programs depends on a variety of factors: the nature of demand and supply of currency, the number and locations of the processing centers, and the resulting fit‐sorting, holding, and transportation costs. The interrelated nature of these decisions motivates the need for an integrated model that captures the flow of currency in the entire network of the DI. Based on our work with Brink's Inc., a leading secure‐logistics provider, we develop a mixed‐integer linear programming (MILP) model to provide managers of DIs with a decision‐making tool under the Fed's new guidelines. Broadly, we analyze the following questions: (i) Over all typical practical realizations of the demand for currency that a DI may face, and over all reasonable cost implications, is there a menu of “good” operating policies? (ii) What is the monetary impact of fit‐sorting and custodial inventories on a DI? and (iii) To what extent will the Fed's new guidelines address its main goal, namely, a reduction in the practice of cross shipping by encouraging DIs to recirculate currency?
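The sketch below gives a hedged, single-branch flavor of the trade-off the network-wide MILP weighs: fit-sorting deposits for reuse versus cross-shipping and paying the proposed fee. All cost figures are hypothetical, and the real model also captures transportation, holding, and custodial-inventory decisions across the branch network.

```python
# Single-branch, single-week illustration (all figures hypothetical):
# is it cheaper to fit-sort deposits and reuse the fit notes, or to
# deposit and withdraw at the Fed and pay the fee on the overlap?
deposits = 900_000.0        # $ of one denomination deposited this week
withdrawals = 700_000.0     # $ of the same denomination withdrawn this week
fit_fraction = 0.8          # share of deposits found fit (usable) by sorting

fit_sort_cost_per_1000 = 1.2   # $ per $1,000 sorted (hypothetical)
recirc_fee_per_1000 = 5.0      # $ per $1,000 cross-shipped (hypothetical)

# Option 1: fit-sort the deposits and reuse fit notes to serve withdrawals.
reused = min(withdrawals, fit_fraction * deposits)
cost_sort = (deposits / 1000) * fit_sort_cost_per_1000

# Option 2: deposit everything and withdraw everything; the overlap is
# cross-shipped and draws the recirculation fee.
cross_shipped = min(deposits, withdrawals)
cost_fee = (cross_shipped / 1000) * recirc_fee_per_1000

print(f"Fit-sort and reuse: ${cost_sort:,.0f} (reuses ${reused:,.0f})")
print(f"Cross-ship and pay fee: ${cost_fee:,.0f}")
```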
The overuse of its currency processing facilities by depository institutions (DIs) has motivated the Federal Reserve (Fed) to impose a new cash recirculation policy. This overuse is characterized by the practice of cross-shipping, whereby a DI both deposits and withdraws cash of the same denomination in the same business week in the same geographical area. Under the new policy, which came into effect in July 2007, the Fed has imposed a recirculation fee on cross-shipped cash. The Fed intends to use this fee to induce DIs to recirculate cash effectively so that the societal cost of providing cash to the public is lowered. To examine the efficacy of this mechanism, we first characterize the social optimum and then analyze the response of DIs under a recirculation fee levied on cross-shipped cash. We show that neither a linear recirculation fee, which is the Fed's current practice, nor a more sophisticated nonlinear fee is sufficient to guarantee a socially optimal response from DIs. We then derive a fundamentally different mechanism that induces DIs to self-select the social optimum. Our mechanism incorporates a fairness adjustment that avoids penalizing DIs that recirculate their fair share of cash and rewards DIs that recirculate more than this amount. We demonstrate that the mechanism is easy to implement and tolerates a reasonable amount of imprecision in the problem parameters. We also discuss a concept of welfare-preserving redistribution wherein the Fed allows a group of DIs to reallocate (amongst themselves) their deposits and demand if such a possibility does not increase societal cost. Finally, we analyze the impact of incorporating the custodial inventory program, another component of the Fed's new policy.
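As a hedged numerical contrast (not the paper's exact mechanism), the sketch below compares a linear fee on cross-shipped cash with a fairness-adjusted fee that is zero when a DI recirculates its fair share and negative (a reward) beyond it. The fair-share value, demand, and rates are all hypothetical.

```python
# Contrast a linear cross-shipping fee with a fairness-adjusted fee.
# All figures are hypothetical; the paper's mechanism is richer than this.
fee_rate = 5.0            # $ per $1,000 (hypothetical)
demand = 600_000.0        # weekly withdrawals to be met (hypothetical)
fair_share = 400_000.0    # recirculation regarded as the DI's fair share

for recirculated in (200_000.0, 400_000.0, 600_000.0):
    cross_shipped = max(0.0, demand - recirculated)  # rest is cross-shipped
    linear_fee = fee_rate * cross_shipped / 1000
    adjusted_fee = fee_rate * (fair_share - recirculated) / 1000  # <0 = reward
    print(f"recirculates ${recirculated:>9,.0f}: "
          f"linear fee ${linear_fee:>6,.0f}, "
          f"fairness-adjusted ${adjusted_fee:>+7,.0f}")
```

At the fair share the adjusted fee vanishes, while the linear fee still penalizes the DI for any remaining cross-shipping; beyond the fair share the adjusted fee becomes a reward.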
The increasing popularity of the World Wide Web has made it an attractive medium for advertisers. As more advertisers place Internet advertisements (hereafter also called "ads"), it has become important for Web site owners to maximize revenue through the optimal selection and placement of these ads. Unlike most previous research, we consider a hybrid pricing model, where the price advertisers pay is a function of (1) the number of exposures of the ad and (2) the number of clicks on the ad. The problem is to find an ad schedule that maximizes the Web site's revenue under this hybrid pricing model. We formulate two versions of the problem, static and dynamic, and propose a variety of efficient solution techniques that provide near-optimal solutions. In the dynamic version, the schedule of ads is changed based on individual user click behavior. We show, by a theoretical proof under special circumstances and an experimental demonstration under general conditions, that a schedule that adapts to user click behavior consistently outperforms one that does not. We also demonstrate that, to benefit from observing user click behavior, the associated probability parameter need not be estimated accurately. For both versions, we examine the sensitivity of the revenue with respect to the model parameters.
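As a minimal sketch of the hybrid pricing model (the rates, ads, and click probabilities below are hypothetical), an ad's expected revenue per exposure combines the per-exposure rate with the per-click rate weighted by the click probability; a static schedule can then rank ads by this quantity, while the dynamic version described above would re-rank as observed clicks update the click-probability estimates.

```python
# Hybrid pricing sketch: revenue per exposure = exposure rate plus
# click rate times click probability. All values are invented.
ads = [  # (name, $ per exposure, $ per click, click probability)
    ("ad_a", 0.002, 0.30, 0.010),
    ("ad_b", 0.001, 0.50, 0.015),
    ("ad_c", 0.004, 0.10, 0.005),
]

def expected_revenue_per_exposure(exposure_rate, click_rate, click_prob):
    return exposure_rate + click_rate * click_prob

# A simple static schedule: fill the available slots with the ads that
# have the highest expected revenue per exposure.
ranked = sorted(ads, key=lambda a: expected_revenue_per_exposure(*a[1:]),
                reverse=True)
for name, e, c, p in ranked:
    print(f"{name}: expected ${expected_revenue_per_exposure(e, c, p):.4f} "
          f"per exposure")
```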