Genome-wide expression profiling with DNA microarrays has provided, and will continue to provide, a great deal of data to the plant scientific community. However, reliability concerns have required the development of data-quality tests for common systematic biases. Fortunately, most large-scale systematic biases are detectable, and some are correctable by normalization. Technical replication experiments and statistical surveys indicate that these biases vary widely in severity and appearance. As a result, no single normalization or correction method currently available is able to address all the issues. However, careful sequence selection, array design, experimental design and experimental annotation can substantially improve the quality and biological value of microarray data. In this review, we discuss these issues with reference to examples from the Arabidopsis Functional Genomics Consortium (AFGC) microarray project.
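As a minimal illustration of the kind of normalization the abstract refers to, the sketch below median-centers the log2 ratios of a single two-channel array, which removes a constant dye or intensity bias under the common assumption that most genes are unchanged between samples. The data values and the assumption of a uniform bias are hypothetical; real pipelines use richer, intensity-dependent corrections.

```python
from statistics import median

def median_center(log_ratios):
    """Correct a constant dye/intensity bias on one array by subtracting
    the median log2(Cy5/Cy3) ratio, assuming most genes are unchanged."""
    m = median(log_ratios)
    return [r - m for r in log_ratios]

# Hypothetical array with a uniform +0.5 dye bias; the fourth gene
# is genuinely induced (all values are illustrative, not real data).
raw = [0.5, 0.4, 0.6, 2.5, 0.5]
corrected = median_center(raw)
```

After centering, the unchanged genes sit near zero and the induced gene's ratio is read off relative to them; intensity-dependent biases would instead call for a local (e.g. lowess-style) fit rather than a single global median.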
The ability to quantitatively compare protein levels across different regions of the brain to identify disease mechanisms remains a fundamental research challenge. It requires both a robust method to efficiently isolate proteins from small amounts of tissue and a differential technique that provides a sensitive and comprehensive analysis of these proteins. Here, we describe a proteomic approach for the quantitative mapping of membrane proteins between mouse fore- and hindbrain regions. The approach focuses primarily on a recently developed method for the fractionation of membranes and on-membrane protein digestion, but incorporates off-line SCX fractionation of the peptide mixture and nano-LC-MS/MS analysis using an LTQ-FT-ICR instrument as part of the analytical method. Comparison of mass spectral peak intensities between samples, mapping of peaks to peptides and protein sequences, and statistical analysis were performed using in-house differential analysis software (DAS). In total, 1213 proteins were identified and 967 were quantified; 81% of the identified proteins were known membrane proteins and 38% of the protein sequences were predicted to contain transmembrane helices. Although this paper focuses primarily on characterizing the efficiency of this purification method on a typical sample set, for many of the quantified proteins, such as glutamate receptors, GABA receptors, calcium channel subunits, and ATPases, the observed ratios of protein abundance were in good agreement with the known mRNA expression levels and/or intensities of immunostaining in rostral and caudal regions of murine brain. This suggests that the approach would be well-suited for incorporation in more rigorous, larger scale quantitative analyses designed to achieve biological significance. Keywords: brain • membrane proteins • neurotransmitter receptor • ion-channel • label-free proteomics • quantitative proteomics • Fourier transform mass spectrometry
Liquid chromatography-mass spectrometry (LC-MS)-based proteomics is becoming an increasingly important tool for characterizing the abundance of proteins in biological samples of various types and across conditions. Effects of disease or drug treatments on protein abundance are of particular interest for the characterization of biological processes and the identification of biomarkers. Although state-of-the-art instrumentation is available to make high-quality measurements and commercial software is available to process the data, the complexity of the technology and data presents challenges for bioinformaticians and statisticians. Here, we describe a pipeline for the analysis of quantitative LC-MS data. Key components of this pipeline include experimental design (sample pooling, blocking, and randomization) as well as deconvolution and alignment of mass chromatograms to generate a matrix of molecular abundance profiles. An important challenge in LC-MS-based quantitation is to accurately identify and assign abundance measurements to members of protein families. To address this issue, we implement a novel statistical method for inferring the relative abundance of related members of protein families from tryptic peptide intensities. This pipeline has been used to analyze quantitative LC-MS data from multiple biomarker discovery projects. We illustrate our pipeline here with examples from two of these studies, and show that the pipeline constitutes a complete workable framework for LC-MS-based differential quantitation. Supplementary material is available at http://iec01.mie.utoronto.ca/~thodoros/Bukhman/.
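The abstract's core quantitation problem, rolling peptide intensities up to protein-level relative abundance, can be sketched in simplified form. The toy below is not the authors' statistical method: it merely takes the median log2 ratio of peptides that map uniquely to one protein and discards peptides shared across family members, which is the ambiguity their model is designed to resolve. All peptide and protein names and intensities are hypothetical.

```python
import math
from collections import defaultdict
from statistics import median

def protein_ratios(intensity_a, intensity_b, peptide_to_proteins):
    """Toy roll-up: per protein, median log2(A/B) over peptides that
    map uniquely to it; shared (ambiguous) peptides are skipped."""
    per_protein = defaultdict(list)
    for pep, prots in peptide_to_proteins.items():
        if len(prots) != 1:  # shared across family members: ambiguous
            continue
        per_protein[prots[0]].append(math.log2(intensity_a[pep] / intensity_b[pep]))
    return {p: median(r) for p, r in per_protein.items()}

# Hypothetical family of two related proteins with one shared peptide.
mapping = {"PEPK": ["GRIA1"], "PEPL": ["GRIA1"], "PEPM": ["GRIA1", "GRIA2"]}
ratios = protein_ratios({"PEPK": 800, "PEPL": 400, "PEPM": 600},
                        {"PEPK": 200, "PEPL": 100, "PEPM": 300},
                        mapping)
```

Note that GRIA2, observed only through the shared peptide, gets no estimate at all here; a statistical model that partitions shared-peptide intensity among family members is exactly what this naive approach lacks.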
Large numbers of expressed sequence tags (ESTs) have now been generated from a variety of model organisms. In plants, substantial collections of ESTs are available for Arabidopsis and rice, in each case representing significant proportions of the estimated total numbers of genes. Large-scale comparisons of Arabidopsis and rice sequences are especially interesting because these two species are representatives of the two subclasses of the flowering plants (Dicotyledonae and Monocotyledonae, respectively). Here we present the results of systematic analysis of the Arabidopsis and rice EST sets. Non-redundant sets of sequences from Arabidopsis and rice were first separately derived and then combined so that gene families in common between the two species could be identified. Our results show that 58% of non-singleton ESTs are derived from genes in gene families common to the two species. These gene families constitute the basis of a core set of higher plant genes.
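A figure like the 58% above comes from clustering sequences into families and asking which families span both species. As a hedged sketch of that bookkeeping (not the authors' clustering pipeline, which relies on sequence-similarity searches), the toy below takes precomputed similarity hits, builds clusters with union-find, and reports the fraction of non-singleton sequences whose cluster contains both Arabidopsis ("ath") and rice ("osa") members. Sequence IDs and hits are invented for illustration.

```python
def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def shared_family_fraction(seqs, hits):
    """seqs: {seq_id: species}; hits: precomputed similarity pairs.
    Returns the fraction of non-singleton sequences whose family
    (connected component) spans both species."""
    parent = {s: s for s in seqs}
    for a, b in hits:
        parent[find(parent, a)] = find(parent, b)
    clusters = {}
    for s in seqs:
        clusters.setdefault(find(parent, s), []).append(s)
    families = [c for c in clusters.values() if len(c) > 1]
    in_family = sum(len(c) for c in families)
    shared = sum(len(c) for c in families
                 if {seqs[s] for s in c} == {"ath", "osa"})
    return shared / in_family if in_family else 0.0

seqs = {"a1": "ath", "a2": "ath", "a3": "ath", "a4": "ath",
        "r1": "osa", "r2": "osa"}
hits = [("a1", "a2"), ("a2", "r1"), ("a3", "a4")]
frac = shared_family_fraction(seqs, hits)
```

Here one family spans both species (a1, a2, r1), one is Arabidopsis-only (a3, a4), and r2 is a singleton, giving a shared-family fraction of 3/5.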
Performance Assessment (PA) is the use of mathematical models to simulate the long-term behavior of engineered and geologic barriers in a nuclear waste repository; methods of uncertainty analysis are used to assess effects of parametric and conceptual uncertainties associated with the model system upon the uncertainty in outcomes of the simulation. PA is required by the U.S. Environmental Protection Agency as part of its certification process for geologic repositories for nuclear waste. This paper is a dialogue to explore the value and limitations of PA. Two “skeptics” acknowledge the utility of PA in organizing the scientific investigations that are necessary for confident siting and licensing of a repository; however, they maintain that the PA process, at least as it is currently implemented, is an essentially unscientific process with shortcomings that may provide results of limited use in evaluating actual effects on public health and safety. Conceptual uncertainties in a PA analysis can be so great that results can be confidently applied only over short time ranges, the antithesis of the purpose behind long-term, geologic disposal. Two “proponents” of PA agree that performance assessment is unscientific, but only in the sense that PA is an engineering analysis that uses existing scientific knowledge to support public policy decisions, rather than an investigation intended to increase fundamental knowledge of nature; PA has different goals and constraints than a typical scientific study. The “proponents” describe an ideal, six-step process for conducting generalized PA, here called probabilistic systems analysis (PSA); they note that virtually all scientific content of a PA is introduced during the model-building steps of a PSA; they contend that a PA based on simple but scientifically acceptable mathematical models can provide useful and objective input to regulatory decision makers.
The value of the results of any PA must lie between these two views and will depend on the level of knowledge of the site, the degree to which models capture actual physical and chemical processes, the time over which extrapolations are made, and the proper evaluation of health risks attending implementation of the repository. The challenge is in evaluating whether the quality of the PA matches the needs of decision makers charged with protecting the health and safety of the public.
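The "uncertainty analysis" the abstract describes is, at its simplest, Monte Carlo propagation of parameter uncertainty through a barrier model. The toy below is a deliberately crude illustration of that idea, not any regulatory PA methodology: it samples two uncertain parameters from assumed priors, runs a one-line radionuclide travel-time model, and reads off a conservative percentile. Model form, path length, and both prior distributions are invented for illustration.

```python
import random

def travel_time_years(path_length_m, velocity_m_per_yr, retardation):
    """Toy barrier model: time for a radionuclide to traverse a
    geologic barrier, slowed by a sorption retardation factor."""
    return path_length_m / velocity_m_per_yr * retardation

random.seed(0)
samples = []
for _ in range(10_000):
    v = random.lognormvariate(-4, 1)  # groundwater velocity, m/yr (assumed prior)
    r = random.uniform(10, 100)       # retardation factor (assumed prior)
    samples.append(travel_time_years(500.0, v, r))
samples.sort()
p05 = samples[len(samples) // 20]     # 5th percentile: a conservative lower bound
```

The output is a distribution of outcomes rather than a single prediction, which is exactly where the dialogue's dispute lives: the percentile is only as meaningful as the assumed model form and priors, i.e. the "scientific content introduced during the model-building steps."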