Abstract There is undisputed evidence that personalized medicine, that is, a more precise assessment of which medical intervention might best serve an individual patient on the basis of novel technology, such as molecular profiling, can have a significant impact on clinical outcomes. The field, however, is still new, and the demonstration of improved effectiveness compared with standard of care comes at a cost. How can we be sure that personalized medicine indeed provides a measurable clinical benefit, that we will be able to afford it, and that we can provide adequate access? The risk-benefit evaluation that accompanies each medical decision requires not only good clinical data but also an assessment of the cost and infrastructure needed to provide access to the technology. Several examples from the last decade illustrate which types of personalized medicines and diagnostic tests are readily taken up in clinical practice and which are more difficult to introduce. And as regulators and payers in the United States and elsewhere take on personalized medicine, an interesting convergence can be observed: better, more complete information for both approval and coverage decisions could be gained from coordinating regulatory and reimbursement questions. Health economics and outcomes research (HEOR) emerges as an approach that can satisfy both needs. Although HEOR represents a well-established approach to demonstrating the effectiveness of interventions in many areas of medical practice, few HEOR studies exist in the field of personalized medicine today. It is reasonable to expect that this will change over the next few years.
Reproducibility is a fundamental requirement in scientific experiments. Some recent publications have claimed that microarrays are unreliable because lists of differentially expressed genes (DEGs) are not reproducible in similar experiments. Meanwhile, new statistical methods for identifying DEGs continue to appear in the scientific literature. The resultant variety of existing and emerging methods exacerbates confusion and continuing debate in the microarray community on the appropriate choice of methods for identifying reliable DEG lists.
Using the data sets generated by the MicroArray Quality Control (MAQC) project, we investigated the impact of several widely used gene selection procedures on the reproducibility of DEG lists. We present comprehensive results from inter-site comparisons using the same microarray platform, cross-platform comparisons using multiple microarray platforms, and comparisons between microarray results and those from TaqMan, the widely regarded "standard" gene expression platform. Our results demonstrate that (1) previously reported discordance between DEG lists could simply result from ranking and selecting DEGs solely by statistical significance (P) derived from widely used simple t-tests; (2) when fold change (FC) is used as the ranking criterion with a non-stringent P-value cutoff as a filter, the DEG lists become much more reproducible, especially when fewer genes are selected as differentially expressed, as is the case in most microarray studies; and (3) the instability of short DEG lists based solely on P-value ranking is an expected mathematical consequence of the high variability of the t-values; the more stringent the P-value threshold, the less reproducible the DEG list. These observations are also consistent with results from extensive simulation calculations.
We recommend FC-ranking plus a non-stringent P cutoff as a straightforward baseline practice for generating more reproducible DEG lists. Specifically, the P-value cutoff should not be stringent (too small), and the FC threshold should be as large as possible. Our results provide practical guidance for choosing appropriate FC and P-value cutoffs when selecting a given number of DEGs. The FC criterion enhances reproducibility, whereas the P criterion balances sensitivity and specificity.
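To make the recommended procedure concrete, the following is a minimal Python sketch, not the MAQC project's actual code, of FC-ranking combined with a non-stringent P-value filter; the two-group design, the array layout, and the cutoff values are illustrative assumptions.

```python
# Minimal sketch (assumed, not MAQC code): select DEGs by fold-change ranking
# after a non-stringent t-test P-value filter, as recommended above.
import numpy as np
from scipy import stats

def select_degs(group_a, group_b, p_cutoff=0.05, n_top=100):
    """Return indices of the top n_top genes ranked by |log2 fold change|
    among genes that pass a non-stringent two-sample t-test filter.

    group_a, group_b: 2-D arrays of log2 expression values with
    shape (n_samples, n_genes); all names and defaults are assumptions.
    """
    # Per-gene two-sample t-test: the "P" criterion, used only as a filter.
    _, p_values = stats.ttest_ind(group_a, group_b, axis=0)
    # Per-gene log2 fold change: the "FC" ranking criterion.
    log2_fc = group_a.mean(axis=0) - group_b.mean(axis=0)
    passing = np.where(p_values < p_cutoff)[0]
    # Rank the surviving genes by absolute fold change, largest first.
    ranked = passing[np.argsort(-np.abs(log2_fc[passing]))]
    return ranked[:n_top]
```

Reproducibility between two sites or platforms can then be gauged by how many genes the resulting top-n lists share.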
Study Objectives. To review the labels of United States Food and Drug Administration (FDA)‐approved drugs to identify those that contain pharmacogenomic biomarker information, and to collect prevalence information on the use of those drugs for which pharmacogenomic information is included in the drug labeling.
Design. Retrospective analysis.
Data Sources. The Physicians' Desk Reference Web site, Drugs@FDA Web site, and manufacturers' Web sites were used to identify drug labels containing pharmacogenomic information, and the prescription claims database of a large pharmacy benefits manager (insuring > 55 million individuals in the United States) was used to obtain drug utilization data.
Measurements and Main Results. Pharmacogenomic biomarkers were defined, FDA‐approved drug labels containing this information were identified, and utilization of these drugs was determined. Of 1200 drug labels reviewed for the years 1945–2005, 121 contained pharmacogenomic information based on a key word search and follow‐up screening. Of those, 69 labels referred to human genomic biomarkers, and 52 referred to microbial genomic biomarkers. Of the labels referring to human biomarkers, 43 (62%) pertained to polymorphisms in cytochrome P450 (CYP) enzyme metabolism, with CYP2D6 being the most common. Of 36.1 million patients whose prescriptions were processed by a large pharmacy benefits manager in 2006, about 8.8 million (24.3%) received one or more drugs with human genomic biomarker information in the drug label.
Conclusion. Nearly one fourth of all outpatients received one or more drugs that have pharmacogenomic information in the label for that drug. The incorporation and appropriate use of pharmacogenomic information in drug labels should be tested for their ability to improve drug use and safety in the United States.
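As a hypothetical illustration of the kind of key-word screen this study describes, the sketch below flags drug label texts that mention candidate biomarker terms; the keyword list and data layout are assumptions rather than the authors' actual search terms, and flagged labels would still need the follow-up manual screening noted above.

```python
# Hypothetical key-word screen over drug label texts; the keywords below are
# illustrative assumptions, not the study's actual search terms.
BIOMARKER_KEYWORDS = ("CYP2D6", "CYP2C9", "polymorphism", "genotype",
                      "pharmacogenomic", "metabolizer")

def flag_labels(labels):
    """Return names of drugs whose label text mentions any keyword.

    labels: dict mapping drug name -> full label text.
    Matches are case-insensitive; hits require manual follow-up screening.
    """
    return [name for name, text in labels.items()
            if any(kw.lower() in text.lower() for kw in BIOMARKER_KEYWORDS)]
```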
Abstract Biomarkers may be qualified using different qualification processes. A passive approach to qualification has been to accept the end of discussions in the scientific literature as an indication that a biomarker has been accepted. An active approach to qualification requires the development of a comprehensive process by which a consensus may be reached about the qualification of a biomarker. Active strategies for qualification include those associated with context-independent as well as context-dependent qualifications.
The acceptance of microarray technology in regulatory decision-making is being challenged by the existence of various platforms and data analysis methods. A recent report (E. Marshall, Science, 306, 630-631, 2004), by extensively citing the study of Tan et al. (Nucleic Acids Res., 31, 5676-5684, 2003), portrays a disturbingly negative picture of the cross-platform comparability, and, hence, the reliability of microarray technology.
We reanalyzed Tan's dataset and found that the intra-platform consistency was low, indicating a problem in the experimental procedures from which the dataset was generated. Furthermore, by applying three gene selection methods (i.e., p-value ranking, fold-change ranking, and Significance Analysis of Microarrays (SAM)) to the same dataset, we found that p-value ranking (the method emphasized by Tan et al.) results in much lower cross-platform concordance than fold-change ranking or SAM. Therefore, the low cross-platform concordance reported in Tan's study appears to be mainly due to a combination of low intra-platform consistency and a poor choice of data analysis procedures, rather than inherent technical differences among platforms, as suggested by Tan et al. and Marshall.
Our results illustrate the importance of establishing calibrated RNA samples and reference datasets to objectively assess the performance of different microarray platforms and the proficiency of individual laboratories, as well as the merits of various data analysis procedures. Accordingly, we are coordinating the MicroArray Quality Control (MAQC) project, a community-wide effort for microarray quality control.
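The intra- and cross-platform concordance discussed in this study is often summarized as the percentage of overlapping genes (POG) between two equal-length DEG lists; the sketch below is a hypothetical illustration of that calculation, not the authors' code.

```python
# Hypothetical POG calculation: the fraction of genes shared by two
# equal-length DEG lists, expressed as a percentage.
def percent_overlapping_genes(list_a, list_b):
    if len(list_a) != len(list_b):
        raise ValueError("POG assumes equal-length DEG lists")
    shared = set(list_a) & set(list_b)
    return 100.0 * len(shared) / len(list_a)

# e.g., compare the top-100 genes from two platforms, each ranked by
# fold change (or by p-value) on the same samples:
# percent_overlapping_genes(platform1_top[:100], platform2_top[:100])
```

Computed separately for p-value-ranked and fold-change-ranked lists, this single number makes the concordance gap between the two selection methods directly visible.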
Study Objective. To investigate the potential impact of proton pump inhibitors (PPIs) on the effectiveness of clopidogrel in preventing recurrent ischemic events after percutaneous coronary intervention (PCI) with stent placement.
Design. Population‐based, retrospective cohort study.
Data Source. National medical and pharmacy benefit claims database comprising approximately 19 million members.
Patients. A total of 16,690 patients who had undergone PCI with stent placement and who were highly adherent to clopidogrel therapy alone (9862 patients) or to clopidogrel with a PPI (6828 patients) between October 1, 2005, and September 30, 2006.
Measurements and Main Results. The primary end point was the occurrence of a major adverse cardiovascular event during the 12 months after stent placement. These events were defined as hospitalization for a cerebrovascular event (stroke or transient ischemic attack), an acute coronary syndrome (myocardial infarction or unstable angina), coronary revascularization (PCI or coronary artery bypass graft), or cardiovascular death. The composite event rate was compared between patients who received clopidogrel alone and those who received concomitant clopidogrel‐PPI therapy. Baseline differences in covariates were adjusted for by using Cox proportional hazards models. Of the 9862 patients receiving clopidogrel alone, 1766 (17.9%) experienced a major adverse cardiovascular event, compared with 1710 (25.0%) of the 6828 patients who received concomitant clopidogrel‐PPI therapy (adjusted hazard ratio 1.51, 95% confidence interval 1.39–1.64, p<0.0001). Similar associations of increased risk were observed for each PPI studied (omeprazole, esomeprazole, pantoprazole, and lansoprazole).
Conclusion. Concomitant use of a PPI and clopidogrel compared with clopidogrel alone was associated with a higher rate of major adverse cardiovascular events within 1 year after coronary stent placement.
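For readers unfamiliar with the covariate adjustment used in this study, the following is a minimal sketch of fitting a Cox proportional hazards model with the open-source lifelines package; the column names and toy data are assumptions for illustration, and the exponentiated coefficient of the exposure term plays the role of an adjusted hazard ratio like the 1.51 reported above.

```python
# Minimal sketch (assumed column names and toy data) of a covariate-adjusted
# Cox proportional hazards model, as used in the study above.
import pandas as pd
from lifelines import CoxPHFitter

# One row per patient: follow-up time in days, event indicator
# (1 = major adverse cardiovascular event), exposure (concomitant PPI use),
# and a baseline covariate (age). Real analyses adjust for many covariates.
df = pd.DataFrame({
    "time_days":  [365, 120, 365,  80, 365, 200],
    "mace_event": [  0,   1,   0,   1,   0,   1],
    "ppi_use":    [  0,   1,   0,   1,   1,   0],
    "age":        [ 61,  70,  55,  68,  73,  59],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="mace_event")
# exp(coef) for ppi_use is the hazard ratio adjusted for the other covariates.
print(cph.summary["exp(coef)"])
```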
Teaching old dogs new tricks is difficult, but lessons learned from such efforts can be invaluable. Warfarin is an old drug, difficult to administer and a leading cause of drug-related mortality and hospitalizations. New genetic tests for optimizing warfarin therapy have not been adopted. The debate over the precise clinical utility and cost-effectiveness of these tests misses the more important point of building a better, cheaper, and more efficient infrastructure to measure the true real-world impact of personalized medicine. However, this same debate about how, when, and where such testing is appropriate has been invaluable to the field of personalized medicine: progress beyond the science, in policy, regulations, and logistics, can be highlighted along the path to safer and more efficacious warfarin therapy.
Randomized controlled trials are the gold standard for determining the efficacy of therapeutic interventions. However, medical practice has not evolved around the concept of randomized trials, but around the idea of careful observation, (anecdotal) case studies, and the evaluation of retrospective data. Interventions discovered by these means and taken forward into clinical practice became standard practice as they continued to prove superior to prior or alternative types of treatment. Personalized medicine refers to an approach to clinical practice in which a particular treatment is chosen not for the 'average patient' but on the basis of characteristics of an individual patient, for example, a genetic profile that may vary from one patient to another, thereby allowing treatment to be 'personalized' to a patient's individual needs. While the call for prospective randomized controlled trials to assess the effective use of such measurements may make sense in some cases, it hinders the implementation of personalized medicine when applied indiscriminately. Important evidence for the validity and clinical effectiveness of using biomarkers, for example, a patient's genetic profile, can be gained from alternative approaches, including case-control and cohort studies, and retrospective analyses of data. Hence, we need to re-focus on approaches that are neither new nor unproven, but have been ignored over the last few decades.
The clinical utility of a molecular test rises in proportion to a favorable regulatory risk/benefit assessment, and clinical utility is the driver of payer coverage decisions. Although a great deal has been written about clinical utility, debates still center on its 'definition.' We argue that the definition (an impact on clinical outcomes) is self-evident, and that improved communications should focus on the sequential steps in building and proving an adequate level of confidence in a diagnostic test's clinical value proposition. We propose a six-part framework to facilitate communications between test developers and health technology evaluators, relevant to both regulatory and payer decisions.