Peer reviewed · Open access
  • How good is automated prote...
    Kozakov, Dima; Beglov, Dmitri; Bohnuud, Tanggis; Mottarella, Scott E.; Xia, Bing; Hall, David R.; Vajda, Sandor

    Proteins, December 2013, Volume: 81, Issue: 12
    Journal Article

    ABSTRACT The protein docking server ClusPro has been participating in critical assessment of prediction of interactions (CAPRI) since its introduction in 2004. This article evaluates the performance of ClusPro 2.0 for targets 46–58 in Rounds 22–27 of CAPRI. The analysis leads to a number of important observations. First, ClusPro reliably yields acceptable or medium accuracy models for targets of moderate difficulty that have also been successfully predicted by other groups, and fails only for targets that have few acceptable models submitted. Second, the quality of automated docking by ClusPro is very close to that of the best human predictor groups, including our own submissions. This is very important, because servers have to submit results within 48 h and the predictions should be reproducible, whereas human predictors have several weeks and can use any type of information. Third, while we refined the ClusPro results for manual submission by running computationally costly Monte Carlo minimization simulations, we observed significant improvement in accuracy only for two of the six complexes correctly predicted by ClusPro. Fourth, new developments, not seen in previous rounds of CAPRI, are that the top ranked model provided by ClusPro was acceptable or better quality for all these six targets, and that the top ranked model was also the highest quality for five of the six, confirming that ranking models based on cluster size can reliably identify the best near‐native conformations. Proteins 2013; 81:2159–2166. © 2013 Wiley Periodicals, Inc.
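
    The abstract's fourth point is that ranking clusters of docked models by cluster size reliably surfaces the best near-native conformations. ClusPro's actual pipeline is more involved (FFT-based sampling followed by energy filtering and clustering), but the ranking idea can be illustrated with a minimal, hypothetical sketch: greedily cluster poses by pairwise RMSD within a fixed radius, then rank clusters by membership count. The function name, the toy RMSD matrix, and the 9 Å radius below are illustrative assumptions, not values taken from the article.

    ```python
    def greedy_cluster(rmsd, radius):
        """Greedy clustering sketch (illustrative, not ClusPro's exact algorithm):
        repeatedly pick the pose with the most neighbors within `radius` as a
        cluster center, remove that cluster, and repeat. Clusters are returned
        largest first, so clusters[0] is the top-ranked (largest) cluster."""
        remaining = set(range(len(rmsd)))
        clusters = []
        while remaining:
            # Center = pose with the most remaining neighbors within the radius
            # (every pose is its own neighbor, since rmsd[i][i] == 0).
            center = max(
                remaining,
                key=lambda i: sum(1 for j in remaining if rmsd[i][j] <= radius),
            )
            members = [j for j in remaining if rmsd[center][j] <= radius]
            clusters.append(members)
            remaining -= set(members)
        clusters.sort(key=len, reverse=True)
        return clusters

    # Toy symmetric RMSD matrix (Å) for five hypothetical docked poses:
    # poses 0-2 form one tight group, poses 3-4 another, far apart.
    rmsd = [
        [0, 2, 3, 20, 21],
        [2, 0, 2, 19, 20],
        [3, 2, 0, 18, 19],
        [20, 19, 18, 0, 1],
        [21, 20, 19, 1, 0],
    ]
    clusters = greedy_cluster(rmsd, 9.0)
    ```

    On this toy input the largest cluster is {0, 1, 2}, so cluster-size ranking would put a model from that group first, mirroring the abstract's observation that the biggest cluster tends to contain the near-native pose.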