The importance of software maintenance in managing the life-cycle costs of a system cannot be overemphasized. Beyond a point, however, it is better to replace a system than to maintain it. We derive a model and an operating policy that minimize the sum of maintenance and replacement costs over the useful life of a software system. The main goal is to compare uniform (occurring at fixed time intervals) and flexible (occurring at varying, planned time intervals) policies for maintenance and replacement. The model draws on the empirical work of earlier researchers to consider 1) the inclusion of user requests for maintenance, 2) scale economies in software maintenance, 3) efficiencies derived from replacing old software technology with new software technology, and 4) the impact of software reuse on replacement and maintenance. Results from our model show that the traditional practice of maintaining or replacing a software system at uniform time intervals may not be optimal. We also find that an increase in software reuse leads to more frequent replacement, but the number of maintenance activities is not significantly affected.
The increasing popularity of the World Wide Web has made it an attractive medium for advertisers. As more advertisers place Internet advertisements (hereafter also called "ads"), it has become important for Web site owners to maximize revenue through the optimal selection and placement of these ads. Unlike most previous research, we consider a hybrid pricing model, where the price advertisers pay is a function of 1) the number of exposures of the ad and 2) the number of clicks on the ad. The problem is to find an ad schedule that maximizes Web site revenue under a hybrid pricing model. We formulate two versions of the problem, static and dynamic, and propose a variety of efficient solution techniques that provide near-optimal solutions. In the dynamic version, the schedule of ads changes based on individual user click behavior. Using a theoretical proof under special circumstances and an experimental demonstration under general conditions, we show that a schedule that adapts to user click behavior consistently outperforms one that does not. We also demonstrate that, to benefit from observing user click behavior, the associated probability parameter need not be estimated accurately. For both versions, we examine the sensitivity of the revenue with respect to the model parameters.
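The hybrid pricing idea can be sketched with a toy static scheduler. Everything here, the fee values, the click probabilities, and the greedy slot-filling rule, is an illustrative assumption, not the paper's formulation, which handles richer scheduling constraints:

```python
def expected_revenue_per_display(e, c, p):
    """Hybrid pricing: an exposure fee e plus a per-click fee c weighted
    by the (assumed known) click probability p."""
    return e + c * p

def static_schedule(ads, slots):
    """Greedy static schedule: fill the available slots with the ads of
    highest expected revenue per display."""
    ranked = sorted(ads, key=lambda a: expected_revenue_per_display(*a[1:]),
                    reverse=True)
    return [name for name, *_ in ranked[:slots]]

ads = [
    # (name, exposure fee, click fee, assumed click probability)
    ("banner_a", 0.010, 0.50, 0.02),   # 0.010 + 0.50*0.02 = 0.020
    ("banner_b", 0.005, 1.00, 0.03),   # 0.005 + 1.00*0.03 = 0.035
    ("banner_c", 0.015, 0.20, 0.01),   # 0.015 + 0.20*0.01 = 0.017
]

print(static_schedule(ads, slots=2))  # → ['banner_b', 'banner_a']
```

A dynamic variant in the spirit of the abstract would re-estimate each p from the observed clicks of the current user and re-rank between displays.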
In constructing a software system, extended periods of coding without adequate coordination (such as system integration and testing) can result in considerable fault-correction effort. On the other hand, too much coordination can also prove counterproductive by disrupting the smooth flow of development work. The goal, therefore, is to find an optimal level of coordination that minimizes system construction effort while adhering to functionality and schedule constraints. Previous research, however, has not considered dynamic project factors such as system growth, system stability, and team learning when addressing this coordination problem. Dynamic factors are important because they can lead to differences in the intensity (frequency) of coordination needed at different stages of system construction. Unlike existing studies, we propose a dynamic coordination policy that places coordination activities at optimal (and often nonuniform) intervals during the construction of a system. Our analysis shows that, if a system stabilizes slowly, more intense coordination should occur early in the project. Also, if the team's knowledge of the system improves with time (i.e., learning effects are present), more intense coordination should occur both near the beginning and near the end of the project. Our analysis also shows that, by encouraging more frequent coordination, superior development tools can facilitate team learning. Finally, applying the coordination model to data from a NASA software project demonstrates that optimally coordinating a project can significantly reduce system construction cost.
We study the presence of economic bias in the training data used to develop inductive expert systems. Such bias arises when an expert considers economic factors in decision making. We find that the presence of economic bias is particularly harmful when there is an economic misalignment between the expert and the user of the induced expert system; such misalignment is referred to as differential bias. The most significant contribution of this study is a training-data debiasing procedure that uses a genetic algorithm to reconstruct training data that is relatively free of economic bias. We conduct a series of simulation experiments that show the following: the economic performance of accuracy-seeking and value-seeking algorithms is statistically the same when the training data has economic bias; both kinds of algorithms suffer in the presence of differential bias; the proposed debiasing procedure significantly combats differential bias; and the debiasing procedure is quite robust with respect to estimation errors in its input parameters.
This study examines coordination issues that occur in allocating spending between advertising and information technology (IT) in electronic retailing. Electronic retailers run the risk of overspending on advertising to attract customers but underspending on IT, resulting in inadequate processing capacity at the firm's website. In this paper, we present a centralized, joint marketing-IT model to optimally allocate spending between advertising and IT, and we discuss an uncoordinated case where marketing and IT make suboptimal advertising and capacity decisions. We show how these decisions can be coordinated either by reducing the value of a customer session or by designing an optimal processing contract between marketing and IT. Both coordination methods can be implemented with only local knowledge of the IT function, yet they generate a solution that almost matches the quality of the centralized solution. We extend our basic model to consider demand uncertainty, lagged advertising effects, and uncertainties in the lead time to acquire IT capacity. With demand uncertainty, electronic retailers should reduce spending on advertising and increase IT capacity if there is potential for a demand upswing and the cost of IT capacity is relatively low. The value of a customer session should be further reduced when uncertainties exist; this is required to share the risk of excess or inadequate IT capacity.
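A stylized sketch of the centralized allocation idea. The functional forms (square-root response of sessions to advertising spend, linear capacity in IT spend) and all parameter values are invented for illustration; only the structure, that sessions the site cannot serve earn nothing, so advertising and IT spend must be balanced, comes from the abstract:

```python
def profit(a, k, value=1.0, draw=4.0, cap=8.0):
    """Profit from spending a on advertising and k on IT capacity."""
    sessions = draw * a ** 0.5        # diminishing returns to advertising
    capacity = cap * k                # processing capacity bought by IT spend
    served = min(sessions, capacity)  # sessions beyond capacity earn nothing
    return value * served - a - k

def best_plan(grid):
    """Centralized planner: jointly search over advertising and IT spend."""
    return max(((a, k) for a in grid for k in grid), key=lambda p: profit(*p))

grid = [i / 4 for i in range(1, 81)]  # spend levels 0.25 .. 20.0
a_star, k_star = best_plan(grid)
# At the joint optimum, bought capacity roughly matches attracted sessions;
# an advertising-heavy plan such as (a=10.0, k=0.5) wastes most sessions.
```

The uncoordinated case in the paper corresponds to marketing choosing a and IT choosing k separately; the sketch only shows why joint optimization avoids the overspend/underspend mismatch.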
We derive bounds on the probability of a goal node given a set of acquired input nodes. The bounds apply to decomposable networks, a class of Bayesian networks encompassing causal trees and causal polytrees. The difficulty of computing the bounds depends on the characteristics of the decomposable network. For directly connected networks with binary goal nodes, tight bounds can be computed in polynomial time. For other kinds of decomposable networks, the derivation of the bounds requires solving an integer program with a nonlinear objective function, a computationally intractable problem in the worst case. We provide a relaxation technique that computes looser bounds in polynomial time for more complex decomposable networks. We briefly describe an application of the probability bounds to a record linkage problem.
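A minimal illustration of why such bounds exist: when an input node X has not been acquired, P(goal) is a convex combination of P(goal | x) over the values of X, so it must lie between the smallest and largest of those conditionals whatever the distribution over X. This toy single-parent case is far simpler than the decomposable-network bounds derived in the paper:

```python
def goal_bounds(conditionals):
    """Bounds on P(goal) when the parent input X is unacquired.

    conditionals: P(goal | X = x) for each value x of X. For any prior
    over X, P(goal) = sum_x P(x) * P(goal | x), a convex combination,
    so it lies in [min(conditionals), max(conditionals)].
    """
    return min(conditionals), max(conditionals)

lo, hi = goal_bounds([0.2, 0.7, 0.4])   # → (0.2, 0.7)

# Any prior over X yields an exact value inside the bounds, e.g.:
prior = [0.3, 0.5, 0.2]
exact = sum(p * c for p, c in zip(prior, [0.2, 0.7, 0.4]))  # 0.49
assert lo <= exact <= hi
```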
Combination therapy with the BRAF inhibitor dabrafenib plus the MEK inhibitor trametinib improved survival in patients with advanced melanoma with BRAF V600 mutations. We sought to determine whether adjuvant dabrafenib plus trametinib would improve outcomes in patients with resected, stage III melanoma with BRAF V600 mutations.
In this double-blind, placebo-controlled, phase 3 trial, we randomly assigned 870 patients with completely resected, stage III melanoma with BRAF V600E or V600K mutations to receive oral dabrafenib at a dose of 150 mg twice daily plus trametinib at a dose of 2 mg once daily (combination therapy, 438 patients) or two matched placebo tablets (432 patients) for 12 months. The primary end point was relapse-free survival. Secondary end points included overall survival, distant metastasis-free survival, freedom from relapse, and safety.
At a median follow-up of 2.8 years, the estimated 3-year rate of relapse-free survival was 58% in the combination-therapy group and 39% in the placebo group (hazard ratio for relapse or death, 0.47; 95% confidence interval [CI], 0.39 to 0.58; P<0.001). The 3-year overall survival rate was 86% in the combination-therapy group and 77% in the placebo group (hazard ratio for death, 0.57; 95% CI, 0.42 to 0.79; P=0.0006), but this level of improvement did not cross the prespecified interim analysis boundary of P=0.000019. Rates of distant metastasis-free survival and freedom from relapse were also higher in the combination-therapy group than in the placebo group. The safety profile of dabrafenib plus trametinib was consistent with that observed with the combination in patients with metastatic melanoma.
Adjuvant use of combination therapy with dabrafenib plus trametinib resulted in a significantly lower risk of recurrence in patients with stage III melanoma with BRAF V600E or V600K mutations than the adjuvant use of placebo and was not associated with new toxic effects. (Funded by GlaxoSmithKline and Novartis; COMBI-AD ClinicalTrials.gov number, NCT01682083; EudraCT number, 2012-001266-15.)
Sequential decision models are an important element of expert system optimization when the cost or time to collect inputs is significant and inputs are not known until the system operates. Many expert systems in business, engineering, and medicine have benefited from sequential decision technology. In this survey, we unify the disparate literature on sequential decision models to improve comprehensibility and accessibility. We separate the formulation of sequential decision models from solution techniques. For model formulation, we classify sequential decision models by objective (cost minimization versus value maximization), knowledge source (rules, data, belief network, etc.), and optimized form (decision tree, path, input order). A wide variety of sequential decision models are discussed in this taxonomy. For solution techniques, we demonstrate how search methods and heuristics are influenced by economic objective, knowledge source, and optimized form. We discuss open research problems to stimulate additional research and development.
Ambulatory blood pressure monitoring provides a more reliable assessment of actual BP than office BP and is a more sensitive predictor of clinical cardiovascular outcomes. Recent international guidelines for hypertension have emphasised the usefulness of ambulatory BP for the diagnosis and management of hypertension. We used ambulatory blood pressure monitoring to assess the effect of pharmacological treatment in patients with stage 1 or 2 hypertension. This was a multicentre randomised controlled trial with 360 subjects, 180 in each treatment arm, and a study duration of 6 months. Patients were randomly assigned to receive atenolol or losartan as initial therapy, at a dose of 50 mg once daily at 8 am. Ambulatory BP assessment was done in a subgroup of subjects using the Schiller BR-102 plus machine. One hundred and thirty patients were recruited for ambulatory blood pressure monitoring: 66 in the atenolol arm and 64 in the losartan arm. Significant white coat hypertension was noticed in both arms; of the 130 subjects in the ambulatory group, 41.53% had white coat hypertension. Statistically significant reduction of office BP was observed with both atenolol and losartan, with no significant difference in efficacy between the two drugs; however, by ambulatory blood pressure monitoring, the reduction with either drug was not significant. Dipper status was better in the atenolol group than in the losartan group. Neither drug prevented the morning surge of BP when administered once daily in the morning. In summary, there was a high prevalence of white coat hypertension in patients with stage 1 and stage 2 hypertension, and a similar reduction of systolic and diastolic blood pressure by the two study drugs. Atenolol scores over losartan in converting non-dippers to dippers, but its impact on clinical outcome is not known. The morning surge of BP was unaffected by either study drug.
We study the problem of optimally allocating effort between software construction and debugging. As construction proceeds, new errors are introduced into the system. The objective is to deliver a system of the highest possible quality (the fewest errors) subject to the constraint that N system modules are constructed in a specified duration T. If errors are not corrected during construction, further construction can produce errors at a faster rate. To curb the growth of errors, some effort must be taken away from construction and assigned to testing and debugging. A key finding of this model is that the practice of alternating between pure construction and pure debugging is suboptimal; instead, it is desirable to construct and debug the system concurrently. We extend the above model to integrate decisions traditionally considered "external," such as the time to release the product to the market, with those typically treated as "internal," such as the division of effort between construction and debugging. Results show that integrating these decisions can yield a significant reduction in overall cost. Also, when competitive forces are strong, it may be better to release a product early (with more errors) than late (with fewer errors). Thus, underestimating the cost of errors in the product may be better than overestimating it.
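A toy simulation, under assumed error dynamics rather than the paper's actual model, illustrates why alternating pure construction and pure debugging can lose to concurrent work: errors introduced early compound while construction continues, so keeping the error stock low as you build pays off. The growth rate, injection rate, and removal rate below are arbitrary assumptions:

```python
def simulate(debug_fractions, g=0.10, r=10.0, c=8.0, errors=0.0):
    """Run an assumed error process for one policy.

    debug_fractions[t] is the share of period t's effort spent debugging.
    While construction proceeds, the existing error stock compounds at
    rate g, each unit of construction effort injects r new errors, and
    each unit of debugging effort removes c errors.
    """
    for d in debug_fractions:
        build = 1.0 - d
        errors = max(0.0, errors * (1.0 + g * build) + r * build - c * d)
    return errors

T = 10
alternating = [0.0] * (T // 2) + [1.0] * (T // 2)  # build first, debug later
concurrent = [0.5] * T                             # build and debug together

# Both policies spend the same total construction effort (T/2), but the
# concurrent policy ends with fewer errors under these dynamics.
print(simulate(alternating), simulate(concurrent))
```

The comparison only demonstrates the mechanism; the paper derives the optimal division of effort rather than comparing two fixed policies.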