We study a supply planning problem in a manufacturing system with two stages. The first stage is a remanufacturer that supplies two closely related components to the second (manufacturing) stage, which uses each component as the basis for its respective product. The used products are recovered from the market by a third-party logistics provider through an established reverse logistics network. The remanufacturer may satisfy the manufacturer’s demand either by purchasing new components or by remanufacturing components recovered from the returned used products. The remanufacturer’s costs arise from product recovery, remanufacturing components, purchasing original components, holding inventories of recovered products and remanufactured components, production setups (at the first stage and at each component changeover), disposal of recovered products that are not remanufactured, and coordinating the supply modes. The objective is to develop optimal production plans for different production strategies. These strategies are differentiated by whether inventories of recovered products or remanufactured components are carried, and by whether the order in which retailers are served during the planning horizon may be resequenced. We devise production policies that minimize the total cost at the remanufacturer by specifying the quantity of components to be remanufactured, the quantity of new components to be purchased from suppliers, and the quantity of recovered used products that must be disposed of. The effects of production capacity are also explored. A comprehensive computational study provides insights into this closed-loop supply chain for those strategies that are shown to be NP-hard.
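The three decision quantities the abstract names (remanufacture, purchase, dispose) can be illustrated with a deliberately simplified single-period sketch. This is not the paper's multi-period model with setups and holding costs; the function name and all cost values are hypothetical, and the greedy rule (use the cheaper supply mode first) is only an assumption for illustration.

```python
# Illustrative single-period sketch (not the paper's model): the remanufacturer
# covers demand from the cheaper of two supply modes -- remanufacturing
# recovered units or purchasing new components -- and disposes of unused
# returns. All names and cost values are hypothetical.

def plan_period(demand, recovered, c_reman, c_purchase, c_dispose):
    """Return (remanufactured, purchased, disposed) quantities and total cost."""
    if c_reman <= c_purchase:
        reman = min(demand, recovered)   # use the cheap recovered units first
    else:
        reman = 0                        # purchasing dominates remanufacturing
    purchased = demand - reman           # cover the remainder with new parts
    disposed = recovered - reman         # leftover returns are disposed of
    cost = reman * c_reman + purchased * c_purchase + disposed * c_dispose
    return reman, purchased, disposed, cost

print(plan_period(demand=100, recovered=60, c_reman=2.0, c_purchase=5.0, c_dispose=0.5))
```

The paper's actual strategies couple such per-period choices across the horizon through inventories and setup costs, which is what makes several of them NP-hard.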
“Cloud Kitchens” are delivery-only facilities that house multiple restaurants. Food-delivery platforms operate such kitchens to exploit two advantages: (a) Location advantage, arising due to a cloud kitchen’s central location—this enables lower delivery times to customers. (b) Consolidation advantage, which accrues when multiple restaurants choose to co-locate at the cloud kitchen—this enables the platform to use a common pool of delivery drivers, thereby reducing costs. However, a cloud kitchen’s eventual impacts on both the restaurants and the platform are intricately connected through their respective decisions—namely, the restaurants’ location decisions and the platform’s delivery capacity and delivery time. We examine conditions under which a cloud kitchen simultaneously benefits the primary stakeholders: delivery platform, restaurants, and customers. Our game-theoretic analysis considers two restaurants and a delivery platform. The restaurants simultaneously decide whether to stay at their initial (extreme) locations or relocate to a centrally located cloud kitchen. The platform decides the driver headcount and the delivery times for customers. In line with industry trends, we show that as population density increases beyond a threshold, the restaurants co-locating at the cloud kitchen is first a Pareto-dominant equilibrium and then the unique equilibrium. The platform and customers also prefer this equilibrium, leading to a win-win-win for the stakeholders. A cloud kitchen’s benefit to the platform further increases as the drivers’ operational environment becomes more constrained, i.e., drivers’ carry-limit and speed decrease, and driver cost increases.
The economics of process transparency
Guda, Harish; Dawande, Milind; Janakiraman, Ganesh
Production and operations management, June 2023, Volume 32, Issue 6
Journal Article
Peer-reviewed
We propose and analyze a novel framework to understand the role of noninstrumental information sharing in service operations management, that is, information shared by the firm not to affect consumers' actions, but to better manage their experience in the firm's process. The operations of the firm are organized as a process, consisting of a sequence of tasks, each of random duration. The firm shares real‐time information with the consumer about the progress of their flow unit in the firm's process via a process tracker. The consumer is delay‐sensitive and experiences gain–loss utility (loss aversion and diminishing sensitivity) over time due to changes in beliefs about anticipated delay, as he awaits completion of the process. We analyze when providing such real‐time progress information via process trackers helps, or can possibly hurt, a consumer. Our work draws upon the recent literature on belief‐based/news utility in Economics. We find that in the presence of loss aversion alone, not sharing progress information is beneficial. In the presence of loss aversion and diminishing sensitivity, if low delays are likely, then sharing information is beneficial; otherwise, not sharing information is preferred. Our findings inform a service firm's post‐sales transparency strategy.
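The gain–loss ("news") utility the abstract describes can be sketched with a standard prospect-theory-style value function. The functional form and parameter values below are illustrative assumptions in the spirit of the news-utility literature, not the paper's specification.

```python
# Minimal sketch of gain-loss utility over belief changes about anticipated
# delay, with loss aversion and diminishing sensitivity. Parameter values
# (2.25, 0.88) are conventional illustrative choices, not the paper's.

def news_utility(belief_change, loss_aversion=2.25, curvature=0.88):
    """Utility from a change in anticipated-delay beliefs.

    Positive belief_change = good news (expected delay shrank);
    negative = bad news. loss_aversion > 1 makes losses loom larger;
    curvature < 1 makes both branches concave in magnitude
    (diminishing sensitivity).
    """
    if belief_change >= 0:
        return belief_change ** curvature
    return -loss_aversion * ((-belief_change) ** curvature)

# Bad news of a given size hurts more than equal-sized good news helps:
print(news_utility(4) + news_utility(-4))  # negative net utility
```

Under loss aversion alone, the asymmetry above is why a stream of progress updates (a sequence of good and bad news) can lower expected utility relative to sharing nothing.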
Optimal cardinal contests
Takasi, Goutham; Dawande, Milind; Janakiraman, Ganesh
Production and operations management, November 2023, Volume 32, Issue 11
Journal Article
Peer-reviewed
Open access
We study the design of crowdsourcing contests in settings where the outputs of the contestants are quantifiable, for example, a data science challenge. This setting is in contrast to those where the output is only qualitative and cannot be objectively quantified, for example, when the goal of the contest is to design a logo. The literature on crowdsourcing contests focuses largely on ordinal contests, where contestants' outputs are ranked by the designer and awards are based on relative ranks. Such contests are ideally suited for the latter setting, where output is qualitative. For our setting (quantitative output), it is possible to design cardinal contests, where awards could be based on the actual outputs and not on their ranking alone—thus, the family of cardinal contests includes the family of ordinal contests. We study the problem of designing an optimal cardinal contest. We use mechanism design theory to derive an optimal cardinal mechanism and provide a convenient implementation—a decreasing reward‐meter mechanism—of the optimal contest. We establish the practicality of our mechanism by showing that it is “Obviously Strategy‐Proof,” a recently introduced formal notion of simplicity in the literature. We also compare the optimal cardinal contest with the most popular ordinal contest—namely, the Winner‐Takes‐All (WTA) contest, along several metrics. In particular, the optimal cardinal mechanism delivers a superior expected best output, whereas the WTA contest yields a greater expected contestant welfare. Furthermore, under a sufficiently large budget, the contest designer's expected net‐benefit is higher under the optimal cardinal mechanism than that under the WTA contest, regardless of the number of contestants in the two mechanisms. Our numerical analysis suggests that, for the contest designer, the average improvement provided by the optimal cardinal mechanism over the WTA contest is about 23%.
For a given number of contestants, the benefit of the optimal cardinal mechanism is especially appreciable for projects where the ratio of the designer's utility to agents' cost‐of‐effort falls within a wide practical range. For projects where this ratio is very high, the expected profit of the best WTA contest is reasonably close to that of the optimal cardinal mechanism.
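The ordinal-versus-cardinal distinction can be made concrete with a toy award computation. The proportional rule below is one simple example of a cardinal rule (awards depend on actual outputs); it is purely illustrative and is not the paper's optimal mechanism or its reward-meter implementation.

```python
# Purely illustrative contrast: a Winner-Takes-All (ordinal) award versus one
# hypothetical cardinal award that pays in proportion to actual outputs.
# Neither function is the paper's optimal cardinal mechanism.

def wta_awards(outputs, budget):
    """Ordinal rule: the entire budget goes to the (first) highest output."""
    winner = outputs.index(max(outputs))
    return [budget if i == winner else 0.0 for i in range(len(outputs))]

def proportional_awards(outputs, budget):
    """One simple cardinal rule: budget split in proportion to outputs."""
    total = sum(outputs)
    return [budget * x / total for x in outputs]

outputs = [10.0, 30.0, 60.0]
print(wta_awards(outputs, budget=100.0))           # [0.0, 0.0, 100.0]
print(proportional_awards(outputs, budget=100.0))  # [10.0, 30.0, 60.0]
```

The WTA rule ignores how much better the winner's output is, while a cardinal rule can condition awards on output levels themselves, which is the extra design freedom the paper exploits.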
Emerging technologies such as drone delivery services enable retailers to cost‐effectively offer unprecedented delivery speed and adaptable delivery lead times using dedicated aerial vehicles for individual orders. A natural and important question arises: What is the impact of a drone delivery system (DDS) on a retailer’s extant logistics parameters, for example, the number of customer‐facing delivery centers (last‐mile warehouses) it uses and delivery lead times it offers? On the one hand, the ability to reach customers faster than through traditional means argues for more centralization of delivery services. On the other hand, more decentralization can allow the retailer to offer hitherto unheard‐of delivery lead times and thereby spur demand. We show that, as drone technology matures and becomes more cost‐effective, delivery networks will become increasingly decentralized while delivering products at faster speeds. While perfect delivery customization—under which each demand location is offered a customized delivery guarantee—is theoretically feasible under a DDS, it may not be practical to implement such a finely differentiated delivery strategy. Instead, we show that retailers can recover a significant portion of the profit under this ideal scenario by offering limited delivery‐time customization, that is, partitioning the market into a few delivery “zones” and offering the best feasible delivery guarantee for each zone. In physically congested metropolitan markets, where retailers may be forced to operate with only a few delivery centers, it may be optimal to operate a DDS by offering delivery guarantees that are inferior to the best possible in order to throttle unprofitable demand. In such markets, the effectiveness of limited delivery‐time customization increases as the extent of physical congestion increases.
The promise of consumer data along with advances in information technology has spurred innovation not only in the way firms conduct their business operations but also in the manner in which data are collected. A prominent institutional structure that has recently emerged is a data cooperative—an organization that collects data from its members, and processes and monetizes the pooled data. A characteristic of consumer data is the externality it generates: Data shared by an individual reveal information about other similar individuals; thus, the marginal value of pooled data increases in both the quantity and quality of the data. A key challenge faced by a data cooperative is the design of a revenue‐allocation scheme for sharing revenue with its members. An effective scheme generates a beneficial cycle: It incentivizes members to share high‐quality data, which in turn results in high‐quality pooled data—this increases the attractiveness of the data for buyers and hence the cooperative's revenue, ultimately resulting in improved compensation for the members. While the cooperative naturally wishes to maximize its total surplus, two other important desirable properties of an allocation scheme are individual rationality and coalitional stability. We first examine a natural proportional allocation scheme—which pays members based on their individual contribution—and show that it simultaneously achieves individual rationality, the first‐best outcome, and coalitional stability, when members' privacy costs are homogeneous. Under heterogeneity in privacy costs, we analyze a novel hybrid allocation scheme and show that it achieves both individual rationality and the first‐best outcome, but may not satisfy coalitional stability. Finally, our RobinHood allocation scheme—which uses a fraction of the revenue to ensure coalitional stability and allocates the remaining based on the hybrid scheme—achieves all the desirable properties.
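The proportional allocation idea (pay each member in proportion to their individual contribution) can be sketched directly. The contribution measure used here (quantity times quality) and all numbers are illustrative assumptions; the paper's model defines contributions within its own data-externality framework.

```python
# Minimal sketch of a proportional revenue-allocation scheme: each member's
# share of the cooperative's revenue is proportional to their individual data
# contribution. The quantity-times-quality contribution measure is an
# illustrative assumption, not the paper's definition.

def proportional_allocation(revenue, quantities, qualities):
    """Split revenue in proportion to quantity-weighted data quality."""
    contributions = [n * q for n, q in zip(quantities, qualities)]
    total = sum(contributions)
    return [revenue * c / total for c in contributions]

# Three members: the high-quality, high-volume member earns the largest share.
print(proportional_allocation(100.0, quantities=[10, 10, 20], qualities=[0.5, 1.0, 1.0]))
```

A scheme like this rewards sharing more and better data, which is the incentive cycle the abstract describes; the paper's hybrid and RobinHood schemes then address what proportionality alone cannot guarantee under heterogeneous privacy costs.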
The goals of the guaranteed support price (GSP) scheme, adopted by several developing countries to support their farmers and the underprivileged population, are threefold: (a) as a supply‐side incentive, to ensure high output from the farmers, (b) as a demand‐side provisioning tool, to subsidize the consumption needs of the poor, and (c) to maintain an adequate amount of foodgrains as reserve stock, to mitigate the adverse effects of yield uncertainty (food security). We offer analytically supported insights on the fundamental aspects of this scheme by analyzing a Stackelberg game between a homogeneous population of small farmers and a social planner. We model the strategic behavior of the farmers and the consuming population (Above‐ and Below‐Poverty‐Line consumers), and compare the equilibrium outcome with that under the direct benefit transfer (DBT) scheme, where the social planner simply distributes the budget among the BPL consumers. The comparison of the social planner's surplus depends on the marginal value from maintaining a reserve stock (i.e., the significance of food security). If this value is high, then the surplus under the GSP scheme strictly dominates that under DBT; otherwise, the surplus is identical. The comparison of the production by the farmers depends on two economic forces—the poorness of the BPL consumers and yield uncertainty. If the poorness is extreme, then the two schemes lead to identical production. If yield uncertainty is dominant, then DBT is ineffective in improving production while the GSP scheme can induce a strictly higher production by strategically choosing the reserve stock.