Beginning in March 2020, the United States emerged as the global epicenter for COVID-19 cases, with little to guide policy response in the absence of the extensive data needed for reliable epidemiological modeling in the early phases of the pandemic. In the ensuing weeks, American jurisdictions attempted to manage disease spread on a regional basis using non-pharmaceutical interventions (i.e., social distancing), as uneven disease burden across the expansive geography of the United States carried different implications for policy management in different regions. While Arizona policymakers initially relied on state-by-state national modeling projections from groups outside the state, we sought to create a state-specific model using a mathematical framework that ties disease surveillance to the future burden on Arizona's healthcare system. Our framework is a compartmental system dynamics model built on a SEIRD structure that accounts for multiple manifestations of COVID-19 infection, as well as the observed time delay between public policy enactments and subsequent epidemiological findings. We use a compartment initialization logic coupled with a fitting technique to construct projections for key metrics that guide public health policy, including exposures, infections, hospitalizations, and deaths under a variety of social reopening scenarios. Our approach makes use of X-factor fitting and backcasting methods to construct meaningful and reliable models with minimal available data in order to provide timely policy guidance in the early phases of a pandemic.
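As a rough illustration of the SEIRD compartmental structure mentioned above, the following Python sketch integrates a basic SEIRD system; the parameter values, the hypothetical seird_rhs function, and the omission of hospitalization compartments and policy-delay effects are all simplifying assumptions for illustration, not the paper's actual model.

```python
# Minimal SEIRD compartmental sketch (illustrative only; not the Arizona model).
# S: susceptible, E: exposed, I: infectious, R: recovered, D: dead.
import numpy as np
from scipy.integrate import odeint

def seird_rhs(y, t, beta, sigma, gamma, mu):
    """Right-hand side of a basic SEIRD system (all parameters hypothetical)."""
    S, E, I, R, D = y
    N = S + E + I + R                    # living population
    dS = -beta * S * I / N               # new exposures
    dE = beta * S * I / N - sigma * E    # incubation ends, becomes infectious
    dI = sigma * E - (gamma + mu) * I    # recovery or death
    dR = gamma * I
    dD = mu * I
    return [dS, dE, dI, dR, dD]

# Example run with placeholder parameters and initial conditions.
y0 = [7_200_000, 100, 50, 0, 0]          # roughly Arizona-sized population, seeded
t = np.linspace(0, 180, 181)             # 180-day horizon
traj = odeint(seird_rhs, y0, t, args=(0.4, 1 / 5.2, 1 / 10, 0.002))
```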
Next-generation sequencing (NGS) tests are usually performed on relatively small core biopsy or fine needle aspiration (FNA) samples. Data are limited on what amount of tumor by volume or minimum number of FNA passes is needed to yield sufficient material for NGS. We sought to identify the amount of tumor required to run the PCDx NGS platform.
A total of 2,723 consecutive tumor tissue samples of all cancer types were queried and reviewed for inclusion. Information on tumor volume, success of performing NGS, and results of NGS was compiled. Assessment of sequence analysis, mutation calling and sensitivity, quality control, drug associations, and data aggregation and analysis was performed.
Overall, 6.4% of samples were rejected from all testing due to insufficient tumor quantity. The number of genes with insufficient sensitivity to make definitive mutation calls increased as the percentage of tumor decreased, reaching statistical significance below 5% tumor content. The number of drug associations also decreased with a lower percentage of tumor, but this difference became significant only in the 1-3% range. The number of drug associations did decrease with smaller tissue size, as expected. Neither specimen size nor percentage of tumor affected the ability to pass mRNA quality control. A tumor area of 10 mm² provides a good margin of error for specimens to yield adequate drug association results.
Specimen suitability remains a major obstacle to clinical NGS testing. We determined that PCR-based library creation methods allow smaller specimens, and those with a lower percentage of tumor cells, to be run on the PCDx NGS platform.
We consider the real-life problem of a coach bus manufacturer located in Turkey that must set ordering quantities for a part procured from an unreliable supplier, where the number of items delivered is binomially distributed with an unknown yield parameter, p. We use a well-defined finite-horizon planning context with deterministic demand per period and purchasing, holding, and shortage costs to investigate the effectiveness of a fill-rate based approximate learning scheme in comparison to an exact Bayesian learning scheme, where observations of the supplier's delivery performance are used to update the assumed distribution of p. We formulate the exact optimal learning problem as a Bayes-adaptive Markov decision process and solve the corresponding finite-horizon stochastic dynamic program to provide insights on the value of online learning in comparison to the unrealistic perfect information (PI) and no information (NI) benchmarks. We contrast the performance of the so-called Bayesian Updating (BU) policy with other practical approaches, such as using an assumed/guessed value of p or implementing a constant safety stock. Noting the significant value of learning, we finally study the effectiveness of an approximate learning formulation that lacks asymptotic consistency and convergence properties but involves a much lower computational burden, and demonstrate its surprisingly strong performance, at times beating the BU policy with exact Bayesian updates.
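For intuition on the exact Bayesian learning scheme described above, a minimal conjugate Beta-Binomial update of the belief on the yield parameter p is sketched below; the prior choice, variable names, and single-observation example are illustrative assumptions and do not capture the full Bayes-adaptive MDP formulation.

```python
# Conjugate Beta-Binomial update for an unknown binomial yield parameter p
# (illustrative sketch only; the paper embeds learning in a Bayes-adaptive MDP
# rather than this stand-alone update).

def update_yield_belief(alpha, beta, ordered, delivered):
    """Update a Beta(alpha, beta) belief on p after observing `delivered`
    usable units out of `ordered` units requested."""
    return alpha + delivered, beta + (ordered - delivered)

# Example: vague prior, then an order of 100 units of which 82 arrive usable.
alpha, beta = 1.0, 1.0
alpha, beta = update_yield_belief(alpha, beta, ordered=100, delivered=82)
posterior_mean_p = alpha / (alpha + beta)   # about 0.81 after one observation
```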
This study considers decisions in workforce management assuming individual workers are inherently different as measured by general cognitive ability (GCA). A mixed integer programming (MIP) model that determines different staffing decisions (i.e., hire, cross-train, and fire) in order to minimize workforce-related costs over multiple periods is described. Solving the MIP for large problem instances is computationally burdensome. In this paper, two linear programming (LP) based heuristics and a solution space partition approach are presented to reduce the computational time. A genetic algorithm was also implemented as an alternative method to obtain better solutions and as a point of comparison for the proposed heuristics. The heuristics were applied to realistic manufacturing systems with a large number of machine groups. Experimental results show that the performance of the LP-based heuristics is surprisingly good and indicate that the heuristics can solve large problem instances effectively with reasonable computational effort.
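To make the structure of such a staffing MIP concrete, here is a heavily simplified PuLP sketch with hire/fire decisions over a few periods; the costs, demands, and the omission of cross-training and GCA-based worker differences are placeholder assumptions rather than the paper's actual formulation.

```python
# Simplified multi-period hire/fire staffing MIP (illustrative; the paper's
# model also covers cross-training and GCA-based worker heterogeneity).
import pulp

periods = range(4)
demand = [10, 12, 9, 11]                 # workers required per period (placeholder)
hire_cost, fire_cost, wage = 500, 800, 300

m = pulp.LpProblem("staffing", pulp.LpMinimize)
staff = pulp.LpVariable.dicts("staff", periods, lowBound=0, cat="Integer")
hire = pulp.LpVariable.dicts("hire", periods, lowBound=0, cat="Integer")
fire = pulp.LpVariable.dicts("fire", periods, lowBound=0, cat="Integer")

# Objective: hiring, firing, and wage costs over the horizon.
m += pulp.lpSum(hire_cost * hire[t] + fire_cost * fire[t] + wage * staff[t]
                for t in periods)

initial_staff = 10
for t in periods:
    prev = initial_staff if t == 0 else staff[t - 1]
    m += staff[t] == prev + hire[t] - fire[t]   # workforce balance
    m += staff[t] >= demand[t]                  # meet demand each period

m.solve(pulp.PULP_CBC_CMD(msg=False))
```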
Project portfolio selection (PPS) is a complex problem faced by major companies whenever there are multiple funding opportunities with insufficient budget to fund them all. In this paper, we present our work on a PPS decision support tool that has become a fundamental part of the project portfolio decision process at Intel Corporation across its largest product and market divisions. The paper builds on a previous publication that outlines the decision support tool's bicriteria optimization model by providing a solution procedure capable of solving real-life PPS problems within time frames acceptable to decision makers, as well as further details on the data collection and decision-making process. We also report on various analysis and visualization tools that have been built to allow decision makers to interact with promising solutions provided by the decision support tool. One of the contributions of the paper is to present a typology of the important dependencies between projects that need to be considered, and to provide details on how they are incorporated in the decision support tool's optimization engine. We discuss important implementation details on the decision-making process and the agents involved, and conclude by describing real-life experiences of how the framework can enable intuitive decision-making when choosing portfolios that best satisfy the organization's business goals.
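As a generic illustration of how inter-project dependencies of the kind mentioned above can enter a binary selection model, the constraints below show three common textbook patterns (prerequisite, mutual exclusivity, and coupling); the project names and values are invented, and these are not the actual constraints used in the Intel decision support tool.

```python
# Generic dependency constraints for a binary project selection model
# (illustrative only). x[p] = 1 if project p is selected, 0 otherwise.
import pulp

projects = ["A", "B", "C"]
value = {"A": 10, "B": 6, "C": 8}        # placeholder project values

x = pulp.LpVariable.dicts("select", projects, cat="Binary")
m = pulp.LpProblem("pps_dependencies", pulp.LpMaximize)
m += pulp.lpSum(value[p] * x[p] for p in projects)   # placeholder objective

# Prerequisite: B can be funded only if A is funded.
m += x["B"] <= x["A"]
# Mutual exclusivity: at most one of B and C can be funded.
m += x["B"] + x["C"] <= 1
# Coupling: A and C must be funded together or not at all.
m += x["A"] == x["C"]

m.solve(pulp.PULP_CBC_CMD(msg=False))
```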
We consider the problem of estimating the resulting utilization and cycle times in manufacturing settings that are subject to significant capacity losses due to setups when switching between different product or part types. In particular, we develop queuing approximations for a multi-item server with sequence-dependent setups operating under four distinct setup rules that we have determined to be common in such settings: first-in-first-out, setup avoidance, setup minimization, and type priority. We first derive expressions for the setup utilization and overall utilization, and use Kingman's well-known approximation to estimate the average cycle time at the station under each setup rule. We test the accuracy of the approximations using a simulation experiment, and provide insights on the use of different setup rules under various conditions.
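A simplified sketch of the general idea, combining a setup-inflated effective service time with Kingman's approximation, is given below; folding setups into the service time this way is a crude placeholder and not the rule-specific setup utilization derivations developed in the paper.

```python
# Kingman (VUT) cycle-time approximation at a single station, with a crude
# adjustment for capacity lost to setups (illustrative only).

def kingman_wait(ca2, ce2, u, te):
    """Approximate expected queue wait: (ca^2 + ce^2)/2 * u/(1 - u) * te."""
    return ((ca2 + ce2) / 2.0) * (u / (1.0 - u)) * te

def cycle_time_with_setups(rate, t_process, t_setup, setups_per_job,
                           ca2=1.0, ce2=1.0):
    """Add average setup time per job to the effective service time, then
    apply Kingman's approximation. All inputs are placeholders."""
    te = t_process + setups_per_job * t_setup      # effective service time
    u = rate * te                                  # overall utilization
    if u >= 1.0:
        raise ValueError("station is overloaded under this setup rule")
    return kingman_wait(ca2, ce2, u, te) + te      # wait plus service

# Example: 0.8 jobs/hr, 1 hr processing, 0.5 hr setup incurred on 40% of jobs.
ct = cycle_time_with_setups(rate=0.8, t_process=1.0, t_setup=0.5,
                            setups_per_job=0.4)
```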
To estimate population health outcomes with delayed second dose versus standard schedule of SARS-CoV-2 mRNA vaccination.
Simulation agent-based modeling study.
Simulated population based on a real-world US county.
The simulation included 100 000 agents, with a representative distribution of demographics and occupations. Networks of contacts were established to simulate potentially infectious interactions through occupation, household, and random interactions.
Simulation of standard covid-19 vaccination versus delayed second dose vaccination prioritizing the first dose. The simulation runs were replicated 10 times. Sensitivity analyses included first dose vaccine efficacy of 50%, 60%, 70%, 80%, and 90% after day 12 post-vaccination; vaccination rate of 0.1%, 0.3%, and 1% of population per day; assuming the vaccine prevents only symptoms but not asymptomatic spread (that is, non-sterilizing vaccine); and an alternative vaccination strategy that implements delayed second dose for people under 65 years of age, but not until all those above this age have been vaccinated.
Cumulative covid-19 mortality, cumulative SARS-CoV-2 infections, and cumulative hospital admissions due to covid-19 over 180 days.
Over all simulation replications, the median cumulative mortality per 100 000 for standard dosing versus delayed second dose was 226 versus 179, 233 versus 207, and 235 versus 236 for 90%, 80%, and 70% first dose efficacy, respectively. The delayed second dose strategy was optimal for vaccine efficacies at or above 80% and vaccination rates at or below 0.3% of the population per day, under both sterilizing and non-sterilizing vaccine assumptions, resulting in absolute cumulative mortality reductions between 26 and 47 per 100 000. The delayed second dose strategy for people under 65 performed consistently well under all vaccination rates tested.
A delayed second dose vaccination strategy, at least for people aged under 65, could result in reduced cumulative mortality under certain conditions.
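For readers who want a concrete, if drastically simplified, picture of the dosing-strategy comparison described in this abstract, the sketch below runs a toy agent-based simulation with random mixing only; the population size, contact structure, efficacy values, fatality probability, and the omission of the post-dose delay are all placeholder assumptions and not the study's model.

```python
# Toy agent-based comparison of standard vs delayed-second-dose vaccination
# (illustrative only; the study uses 100 000 agents with occupation, household,
# and random contact networks, none of which are modeled here).
import random

def run(strategy, n=10_000, days=180, daily_doses=30, seed=1,
        eff_one=0.8, eff_two=0.95, beta=0.05, contacts=8, recover=10):
    random.seed(seed)
    state = ["S"] * n          # "S" susceptible, "I" infectious, "R" removed
    doses = [0] * n            # vaccine doses received per agent
    inf_days = [0] * n         # days spent infectious
    for i in range(20):        # seed initial infections
        state[i] = "I"
    deaths = 0
    for _ in range(days):
        # Vaccination: allocate the day's doses according to strategy.
        if strategy == "standard":
            order = sorted(range(n), key=lambda i: -doses[i])   # finish series first
        else:                                                   # "delayed"
            order = sorted(range(n), key=lambda i: doses[i])    # first doses first
        given = 0
        for i in order:
            if given >= daily_doses:
                break
            if doses[i] < 2 and state[i] == "S":
                doses[i] += 1
                given += 1
        # Transmission via random mixing.
        infectious = [i for i in range(n) if state[i] == "I"]
        for i in infectious:
            for _ in range(contacts):
                j = random.randrange(n)
                if state[j] != "S":
                    continue
                protection = eff_two if doses[j] >= 2 else eff_one if doses[j] == 1 else 0.0
                if random.random() < beta * (1 - protection):
                    state[j] = "I"
        # Recovery or death after the infectious period.
        for i in infectious:
            inf_days[i] += 1
            if inf_days[i] >= recover:
                if random.random() < 0.01:      # placeholder fatality probability
                    deaths += 1
                state[i] = "R"
    return deaths

print("standard:", run("standard"), "delayed:", run("delayed"))
```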
We present a novel framework to characterize the probability that an offered appointment with k-days' access delay will be booked and subsequently attended by a patient. We refer to this probability as the "probability of realization" of an offered appointment, and demonstrate how empirical characterizations of this measure can be used to identify improved policies for managing patient demand and enabling the most intentional use of clinical care resources. We consider the estimation problem in the context of new patients looking to establish care at a clinic, and offer a model of patient responses during an appointment scheduling encounter with an agent. We define different cases of data availability and demonstrate the effectiveness of the framework in each case to ensure generalizability. We first demonstrate the accuracy of estimations using simulated data, and then highlight behavioral differences between different patient groups using real-life transactional data from two clinical departments. Finally, we demonstrate a practical use case for the obtained realization probabilities by showing that they can be reliably used as input to a novel time windows-based patient prioritization protocol that allows effective management of demand from different classes by explicitly considering wait sensitivities of patients.
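One simple way to obtain such realization probabilities from transactional scheduling data is a logistic regression on the offered access delay; the sketch below is a generic illustration with synthetic records and made-up field names, not the estimation framework developed in the paper.

```python
# Illustrative estimate of P(appointment offered with k-day delay is booked
# and attended) via logistic regression on synthetic records.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical transactional records: access delay (days) offered to the
# patient, and whether the appointment was ultimately booked and attended.
delays = np.array([[1], [3], [5], [7], [10], [14], [21], [30], [45], [60]])
realized = np.array([1, 1, 1, 1, 1, 0, 1, 0, 0, 0])

model = LogisticRegression().fit(delays, realized)

# Estimated probability of realization for an appointment offered k days out.
k = 12
p_realization = model.predict_proba([[k]])[0, 1]
```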
We consider the intake process of new low back pain (LBP) patients at a neurosurgery clinic, seeking to manage patient demand and improve access delays through personalized routing strategies rather than increased care capacity. Using clinical notes from the first appointments with providers, we devise a decision-tree based intelligent teletriage tool that can be used by non-medically trained agents to predict the surgical class of a patient calling in to request an appointment. The intelligent teletriage tool is based on a classifier that uses surgical-nonsurgical labels, which we generated using a structured algorithm, and features that are easy to obtain directly from the patient during the course of a phone conversation. We establish that the accuracy of the teletriage tool is on the order of 80% using 10-fold cross validation and out-of-sample testing on real-life data sets. We then present three priority-based routing strategies that are neutral with respect to care capacity, and show that, when used in combination with the intelligent triage tool, they can result in a 90% reduction in access delays for the higher priority surgical patients who should be seen urgently. We use detailed simulations of the appointment scheduling workflow to demonstrate our results. We comment on the managerial implications of our work and on the potential of needs-based personalized routing strategies with intelligent teletriage to reduce access delays, improve patient outcomes, and increase provider satisfaction.
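A minimal sketch of the classification step, a decision tree evaluated with 10-fold cross-validation, is shown below using scikit-learn; the intake features and labels are synthetic placeholders and do not reflect the clinic's actual data or the structured labeling algorithm.

```python
# Decision-tree surgical/non-surgical triage classifier with 10-fold CV
# (illustrative sketch; features and data below are synthetic placeholders).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500
# Hypothetical phone-intake features: age, pain duration (weeks),
# leg-pain flag, prior-injection flag.
X = np.column_stack([
    rng.integers(20, 85, n),
    rng.integers(1, 104, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
])
y = rng.integers(0, 2, n)          # 1 = surgical candidate, 0 = non-surgical

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)     # 10-fold cross-validation
print("mean CV accuracy:", scores.mean())
```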