Although high-risk mutations in identified major susceptibility genes (DNA mismatch repair genes and MUTYH) account for some familial aggregation of colorectal cancer, their population prevalence and the causes of the remaining familial aggregation are not known.
We studied the families of 5,744 colorectal cancer cases (probands) recruited from population cancer registries in the United States, Canada, and Australia and screened probands for mutations in mismatch repair genes and MUTYH.
We conducted modified segregation analyses using the cancer history of first-degree relatives, conditional on the proband's age at diagnosis. We estimated the prevalence of mutations in the identified genes, the prevalence of and hazard ratio (HR) for unidentified major gene mutations, and the variance of the residual polygenic component.
We estimated that 1 in 279 of the population carry mutations in mismatch repair genes (MLH1 = 1 in 1,946, MSH2 = 1 in 2,841, MSH6 = 1 in 758, PMS2 = 1 in 714), 1 in 45 carry mutations in MUTYH, and 1 in 504 carry mutations associated with an average 31-fold increased risk of colorectal cancer in unidentified major genes. The estimated polygenic variance was reduced by 30% to 50% after allowing for unidentified major genes and decreased from 3.3 for age <40 years to 0.5 for age ≥70 years (equivalent to sibling relative risks of 5.1 to 1.3, respectively).
Unidentified major genes might explain one third to one half of the missing heritability of colorectal cancer.
Our findings could aid gene discovery and development of better colorectal cancer risk prediction models.
Guidelines for initiating colorectal cancer (CRC) screening are based on family history but do not consider lifestyle, environmental, or genetic risk factors. We developed models to determine risk of CRC, based on lifestyle and environmental factors and genetic variants, and to identify an optimal age to begin screening.
We collected data from 9748 CRC cases and 10,590 controls in the Genetics and Epidemiology of Colorectal Cancer Consortium and the Colorectal Transdisciplinary study, from 1992 through 2005. Half of the participants were used to develop the risk determination model and the other half were used to evaluate the discriminatory accuracy (validation set). Models of CRC risk were created based on family history, 19 lifestyle and environmental factors (E-score), and 63 CRC-associated single-nucleotide polymorphisms identified in genome-wide association studies (G-score). We evaluated the discriminatory accuracy of the models by calculating area under the receiver operating characteristic curve values, adjusting for study, age, and endoscopy history for the validation set. We used the models to project the 10-year absolute risk of CRC for a given risk profile and recommend ages to begin screening in comparison to CRC risk for an average individual at 50 years of age, using external population incidence rates for non-Hispanic whites from the Surveillance, Epidemiology, and End Results program registry.
In our models, E-score and G-score each determined risk of CRC with greater accuracy than family history. A model that combined both scores and family history estimated CRC risk with an area under the receiver operating characteristic curve value of 0.63 (95% confidence interval, 0.62–0.64) for men and 0.62 (95% confidence interval, 0.61–0.63) for women; area under the receiver operating characteristic curve values based on only family history ranged from 0.53 to 0.54, and those based on only E-score or G-score ranged from 0.59 to 0.60. Although screening is recommended to begin at age 50 years for individuals with no family history of CRC, starting ages calculated based on combined E-score and G-score differed by 12 years for men and 14 years for women, for individuals with the highest vs the lowest 10% of risk.
We used data from 2 large international consortia to develop CRC risk calculation models that included genetic and environmental factors along with family history. These determine risk of CRC and starting ages for screening with greater accuracy than the family history only model, which is based on the current screening guideline. These scoring systems might serve as a first step toward developing individualized CRC prevention strategies.
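The AUC values reported above are rank statistics: an AUC of 0.63 means a randomly chosen case receives a higher combined score than a randomly chosen control 63% of the time. A minimal sketch of that computation via the Mann-Whitney U statistic, on synthetic scores (not the consortium data or code):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen case (label 1) scores
    higher than a randomly chosen control (label 0); ties count half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    cases, controls = scores[labels], scores[~labels]
    # compare every case score against every control score
    greater = (cases[:, None] > controls[None, :]).sum()
    ties = (cases[:, None] == controls[None, :]).sum()
    return (greater + 0.5 * ties) / (len(cases) * len(controls))
```

An AUC of 0.5 corresponds to a useless score; perfectly separated scores give 1.0.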
The predictive accuracy of a survival model can be summarized using extensions of the proportion of variation explained by the model, or R2, commonly used for continuous response models, or using extensions of sensitivity and specificity, which are commonly used for binary response models. In this article we propose new time-dependent accuracy summaries based on time-specific versions of sensitivity and specificity calculated over risk sets. We connect the accuracy summaries to a previously proposed global concordance measure, which is a variant of Kendall's tau. In addition, we show how standard Cox regression output can be used to obtain estimates of time-dependent sensitivity and specificity, and time-dependent receiver operating characteristic (ROC) curves. Semiparametric estimation methods appropriate for both proportional and nonproportional hazards data are introduced, evaluated in simulations, and illustrated using two familiar survival data sets.
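The article's incident/dynamic definitions are computed over risk sets using Cox regression output; as a simpler illustration of how accuracy becomes time-dependent, the cumulative/dynamic variant can be estimated empirically when there is no censoring. This sketch is a simplified assumption, not the paper's semiparametric estimator:

```python
import numpy as np

def cd_sens_spec(marker, event_time, c, t):
    """Cumulative/dynamic time-dependent accuracy without censoring:
    cases are subjects who fail by time t, controls are those still
    event-free after t. Returns
      sensitivity(c, t) = P(M > c | T <= t),
      specificity(c, t) = P(M <= c | T > t)."""
    marker = np.asarray(marker, dtype=float)
    event_time = np.asarray(event_time, dtype=float)
    case = event_time <= t
    sens = (marker[case] > c).mean()
    spec = (marker[~case] <= c).mean()
    return sens, spec
```

Sweeping the threshold c at a fixed t traces out a time-specific ROC curve; with censored data, risk-set weighting as in the article is needed instead of these raw proportions.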
Novel biomarkers, in combination with currently available clinical information, have been sought to enhance clinical decision making in many branches of medicine, including screening, surveillance, and prognosis. An individualized clinical decision rule (ICDR) is a decision rule that matches subgroups of patients with tailored medical regimens based on patient characteristics. We proposed new approaches to identify ICDRs by directly optimizing a risk-adjusted clinical benefit function that acknowledges the trade-off between detecting disease and over-treating patients with benign conditions. In particular, we developed a novel plug-in algorithm to optimize the risk-adjusted clinical benefit function, which leads to the construction of both nonparametric and linear parametric ICDRs. In addition, we proposed a novel approach based on the direct optimization of a smoothed ramp loss function to further enhance the robustness of a linear ICDR. We studied the asymptotic theories of the proposed estimators. Simulation results demonstrated good finite sample performance for the proposed estimators and improved clinical utilities when compared to standard approaches. The methods were applied to a prostate cancer biomarker study.
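The ramp loss mentioned above bounds the 0-1 loss (unlike the hinge loss, its penalty saturates at 1, which is what makes the resulting rule robust to outliers) but is not differentiable. The paper's exact smoothing is not reproduced here; one common construction, shown as a hypothetical stand-in, writes the ramp as a difference of two hinges and replaces each hinge with a scaled softplus:

```python
import numpy as np

def ramp(u):
    """Ramp loss: min(1, max(0, 1 - u)). Bounded surrogate for 0-1 loss;
    u is the margin (positive when the decision is correct)."""
    return np.clip(1.0 - u, 0.0, 1.0)

def softplus(x):
    """Numerically stable log(1 + exp(x))."""
    return np.logaddexp(0.0, x)

def smoothed_ramp(u, h=0.05):
    """Smooth approximation to the ramp loss. Uses the identity
    ramp(u) = max(0, 1 - u) - max(0, -u), with each hinge replaced by
    a scaled softplus; h controls smoothness (h -> 0 recovers ramp)."""
    u = np.asarray(u, dtype=float)
    return h * (softplus((1.0 - u) / h) - softplus(-u / h))
```

A smooth surrogate like this lets the linear ICDR coefficients be fit by standard gradient-based optimizers.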
Few studies have examined associations between plasma choline metabolites and risk of colorectal cancer. Therefore, we investigated associations between plasma biomarkers of choline metabolism (choline, betaine, dimethylglycine, and trimethylamine N-oxide [TMAO]) and colorectal cancer risk among postmenopausal women in a case-control study nested within the Women's Health Initiative Observational Study. We selected 835 matched case-control pairs, and cases were further stratified by tumor site (proximal, distal, or rectal) and stage (local/regional or metastatic). Colorectal cancer was assessed by self-report and confirmed by medical records over the mean of 5.2 years of follow-up. Baseline plasma choline metabolites were measured by LC/MS-MS. In multivariable-adjusted conditional logistic regression models, plasma choline tended to be positively associated with rectal cancer risk [OR (95% confidence interval, CI), highest vs. lowest quartile = 2.44 (0.93-6.40); P trend = 0.08], whereas plasma betaine was inversely associated with colorectal cancer overall [0.68 (0.47-0.99); P trend = 0.01] and with local/regional tumors [0.64 (0.42-0.99); P trend = 0.009]. Notably, the plasma betaine:choline ratio was inversely associated with colorectal cancer overall [0.56 (0.39-0.82); P trend = 0.004] as well as with proximal [0.66 (0.41-1.06); P trend = 0.049], rectal [0.27 (0.10-0.78); P trend = 0.02], and local/regional [0.50 (0.33-0.76); P trend = 0.001] tumors. Finally, plasma TMAO, an oxidative derivative of choline produced by intestinal bacteria, was positively associated with rectal cancer [3.38 (1.25-9.16); P trend = 0.02] and with overall colorectal cancer risk among women with lower (vs. higher) plasma vitamin B12 levels (P interaction = 0.003). Collectively, these data suggest that alterations in choline metabolism, which may arise early in disease development, may be associated with higher risk of colorectal cancer.
The positive association between plasma TMAO and colorectal cancer risk is consistent with an involvement of the gut microbiome in colorectal cancer pathogenesis.
Efforts to reduce colorectal cancer incidence and mortality through early detection would be more efficient if targeted at those at increased risk. We developed a colorectal cancer risk prediction model incorporating personal, family, genetic, and environmental risk factors to enhance prevention.
A familial risk profile (FRP) was calculated to summarize individuals' risk based on detailed cancer family history (FH), family structure, probabilities of mutation in major colorectal cancer susceptibility genes, and a polygenic component. We developed risk models, including individuals' FRP or binary colorectal cancer FH, and colorectal cancer risk factors collected at enrollment, using population-based colorectal cancer cases (n = 4,445) and controls (n = 3,967) recruited by the Colon Cancer Family Registry Cohort (CCFRC). Model validation used CCFRC follow-up data for population-based (n = 12,052) and clinic-based (n = 5,584) relatives with no cancer history at recruitment to assess model calibration [expected/observed rate ratio (E/O)] and discrimination [area under the receiver-operating-characteristic curve (AUC)].
The E/O [95% confidence interval (CI)] estimates for FRP models for population-based relatives were 1.04 (0.74-1.45) for men and 0.86 (0.64-1.20) for women, and for clinic-based relatives were 1.15 (0.87-1.58) for men and 1.04 (0.76-1.45) for women. The age-adjusted AUCs (95% CI) for FRP models for population-based relatives were 0.69 (0.60-0.78) for men and 0.70 (0.62-0.77) for women, and for clinic-based relatives were 0.77 (0.69-0.84) for men and 0.68 (0.60-0.76) for women. The incremental values of AUC for FRP over FH models for population-based relatives were 0.08 (0.01-0.15) for men and 0.10 (0.04-0.16) for women, and for clinic-based relatives were 0.11 (0.05-0.17) for men and 0.11 (0.06-0.17) for women.
Both models calibrated well. The FRP-based model provided better risk stratification and risk discrimination than the FH-based model.
Our findings suggest detailed FH may be useful for targeted risk-based screening and clinical management.
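The E/O statistic used above for calibration compares the number of cases the model expects (the sum of predicted risks over the validation cohort) with the number observed; a ratio near 1 indicates good overall calibration. A minimal sketch on synthetic inputs, without the confidence interval machinery:

```python
import numpy as np

def expected_observed_ratio(predicted_risk, observed_event):
    """E/O calibration summary: sum of model-predicted absolute risks
    (expected case count) divided by the observed case count.
    E/O > 1 means the model over-predicts risk; < 1, under-predicts."""
    predicted_risk = np.asarray(predicted_risk, dtype=float)
    observed = np.asarray(observed_event, dtype=bool)
    return predicted_risk.sum() / observed.sum()
```

Calibration (E/O) and discrimination (AUC) are complementary: a model can rank individuals well yet systematically over- or under-state absolute risk, so validation studies such as this one report both.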
Introduction Racial/ethnic disparities in colorectal cancer (CRC) screening and diagnostic testing present challenges to CRC prevention programs. Thus, it is important to understand how differences in CRC screening approaches between healthcare systems are associated with racial/ethnic disparities. Methods This was a retrospective cohort study of patients aged 50–75 years who were members of the Population-based Research Optimizing Screening Through Personalized Regimens cohort from 2010 to 2012. Data on race/ethnicity, CRC screening, and diagnostic testing came from medical records. Data collection occurred in 2014 and analysis in 2015. Logistic regression models were used to calculate AORs and 95% CIs comparing completion of CRC screening between racial/ethnic groups. Analyses were stratified by healthcare system to assess differences between systems. Results There were 1,746,714 participants across four healthcare systems. Compared with non-Hispanic whites (whites), odds of completing CRC screening were lower for non-Hispanic blacks (blacks) in healthcare systems with high screening rates (AOR=0.86, 95% CI=0.84, 0.88) but similar between blacks and whites in systems with lower screening rates (AOR=1.01, 95% CI=0.93, 1.09). Compared with whites, American Indian/Alaskan Natives had lower odds of completing CRC screening across all healthcare systems (AOR=0.76, 95% CI=0.72, 0.81). Hispanics had similar odds of CRC screening (AOR=0.99, 95% CI=0.98, 1.00) and Asian/Pacific Islanders had higher odds of CRC screening (AOR=1.16, 95% CI=1.15, 1.18) versus whites. Conclusions Racial/ethnic differences in CRC screening vary across healthcare systems, particularly for blacks, and may be more pronounced in systems with intensive CRC screening approaches.
There are two popular statistical approaches to biomarker evaluation. One models the risk of disease (or disease outcome) with, for example, logistic regression. A marker is considered useful if it has a strong effect on risk. The second evaluates classification performance by use of measures such as sensitivity, specificity, predictive values, and receiver operating characteristic curves. There is controversy about which approach is more appropriate. Moreover, the two approaches can give contradictory results on the same data. The authors present a new graphic, the predictiveness curve, which complements the risk modeling approach. It assesses the usefulness of a risk model when applied to the population. Although the predictiveness curve relates to classification performance measures, it also displays essential information about risk that is not displayed by the receiver operating characteristic curve. The authors propose that the predictiveness and classification performance of a marker, displayed together in an integrated plot, provide a comprehensive and cohesive assessment of a risk marker or model. The methods are demonstrated with data on prostate-specific antigen and risk factors from the Prostate Cancer Prevention Trial, 1993–2003.
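The predictiveness curve described above is, in essence, the quantile function of the predicted-risk distribution: for each population fraction v, it plots the risk level R(v) below which a proportion v of the population falls. A minimal sketch (evaluation grid and inputs are illustrative, not from the trial data):

```python
import numpy as np

def predictiveness_curve(risks, grid=None):
    """Predictiveness curve: for each quantile v in (0, 1), the risk
    value R(v) such that a fraction v of the population has predicted
    risk at or below R(v) -- the quantile function of the risk
    distribution. Returns (grid, R(grid)) ready for plotting."""
    risks = np.sort(np.asarray(risks, dtype=float))
    if grid is None:
        grid = np.linspace(0.05, 0.95, 19)  # interior quantiles
    return grid, np.quantile(risks, grid)
```

A steep curve indicates a marker that separates the population into low- and high-risk groups; a flat curve near the disease prevalence indicates a marker with little predictive value, information the ROC curve alone does not display.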
Background & Aims Risk for colorectal cancer (CRC) can be greatly reduced through screening. To aid in the development of screening strategies, we refined models designed to determine risk of CRC by incorporating information from common genetic susceptibility loci. Methods By using data collected from more than 12,000 participants in 6 studies performed from 1990 through 2011 in the United States and Germany, we developed risk determination models based on sex, age, family history, genetic risk score (number of risk alleles carried at 27 validated common CRC susceptibility loci), and history of endoscopic examinations. The model was validated using data collected from approximately 1800 participants in the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial, conducted from 1993 through 2001 in the United States. Results We identified a CRC genetic risk score that independently predicted which patients in the training set would develop CRC. Compared with determination of risk based only on family history, adding the genetic risk score increased the discriminatory accuracy from 0.51 to 0.59 (P = .0028) for men and from 0.52 to 0.56 (P = .14) for women. We calculated age- and sex-specific 10-year CRC absolute risk estimates based on the number of risk alleles, family history, and history of endoscopic examinations. A model that included a genetic risk score better determined the recommended starting age for screening in subjects with and without family histories of CRC. The starting age for high-risk men (family history of CRC and genetic risk score at the 90th percentile) was 42 years, and for low-risk men (no family history of CRC and genetic risk score at the 10th percentile) was 52 years. For men with no family history and a high genetic risk score (90th percentile), the starting age would be 47 years; this is an intermediate value that is 5 years earlier than it would be for men with a genetic risk score at the 10th percentile. Similar trends were observed in women.
Conclusions By incorporating information on CRC risk alleles, we created a model to determine the risk for CRC more accurately. This model might be used to develop screening and prevention strategies.
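A genetic risk score of the kind described above is typically a weighted count of risk alleles, with each locus weighted by its per-allele log odds ratio from genome-wide association studies. A minimal sketch with made-up weights (the 27 actual loci and their effect sizes are not reproduced here):

```python
import numpy as np

def genetic_risk_score(genotypes, log_or):
    """Weighted genetic risk score. genotypes: (n_subjects, n_snps)
    array of risk-allele counts (0, 1, or 2); log_or: per-allele log
    odds ratios used as weights (hypothetical values in the tests).
    Returns one score per subject: the sum of allele counts times
    their log odds ratios."""
    genotypes = np.asarray(genotypes, dtype=float)
    log_or = np.asarray(log_or, dtype=float)
    return genotypes @ log_or
```

Ranking subjects by this score and cutting at, for example, the 10th and 90th percentiles yields the low- and high-genetic-risk groups used in the starting-age calculations above.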