Abundance surveys are commonly used to estimate plant or animal densities and frequently require estimating detection probabilities to account for imperfect detection. The estimation of detection probabilities requires additional measurements that take time, potentially reducing the efficiency of the survey when applied to high-density populations. We conducted quadrat, removal, and distance surveys of zebra mussels (Dreissena polymorpha) in three central Minnesota lakes and determined how much survey effort would be required to achieve a pre-specified level of precision for each abundance estimator, allowing us to directly compare survey design efficiencies across a range of conditions. We found that the sampling effort needed to achieve our precision goal depended on both the survey design and the population density. At low densities, survey designs that cover large areas but with lower detection probabilities, such as distance surveys, were more efficient (i.e., required less sampling effort to achieve the same level of precision). However, at high densities, quadrat surveys, which tend to cover less area but with high detection rates, were more efficient. These results demonstrate that the best survey design is likely to be context-specific, requiring some prior knowledge of the underlying population density and the cost/time needed to collect additional information for estimating detection probabilities.
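As an illustration of the effort calculation described above, one simple approach for a quadrat survey is to compute how many quadrats would be needed to reach a target coefficient of variation (CV) for the mean density estimate. The counts and CV target below are hypothetical, not values from the study.

```python
import numpy as np

# Hypothetical zebra mussel counts from 10 pilot quadrats.
counts = np.array([0, 2, 5, 1, 3, 0, 4, 2, 1, 3])
mean = counts.mean()
sd = counts.std(ddof=1)  # sample standard deviation

# Quadrats needed so the CV of the mean density estimate,
# sd / (sqrt(n) * mean), reaches a pre-specified target.
cv_target = 0.15
n_required = int(np.ceil((sd / (cv_target * mean)) ** 2))
```

Because required effort scales with the squared CV of the counts, the same calculation applied to a sparser (lower-density, higher-variance) population yields a much larger `n_required`, which is one way the density dependence described above arises.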
Logistic regression models, or "sightability models," fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it produced estimates similar to the mHT estimator when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. We therefore developed a robust hierarchical modeling approach in which sightability-model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects (FE) model with year-specific parameters and a temporally smoothed (TS) model that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits of model-based approaches that allow information to be shared across years.
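The mHT idea can be sketched as follows: detection probabilities predicted from a fitted logistic sightability model are used to inverse-weight each group detected in a detection-only survey. All numbers below (covariate values, coefficients, group sizes) are illustrative assumptions, not values from the moose surveys.

```python
import numpy as np

# Hypothetical detected groups: group sizes and a detection covariate
# (e.g., visual obstruction) recorded for each detection.
group_size = np.array([2, 1, 3, 1, 2])
cover = np.array([0.1, 0.6, 0.3, 0.8, 0.2])

# Sightability-model coefficients, assumed to have been fit beforehand
# to detection/non-detection data from marked animals.
beta0, beta1 = 2.0, -3.5
p_hat = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * cover)))

# Modified Horvitz-Thompson estimate: each detected group counts as
# group_size / p_hat animals, inflating for groups likely missed.
N_hat = np.sum(group_size / p_hat)
```

Note that `N_hat` always exceeds the raw count of detected animals, since every weight `1 / p_hat` is at least 1; uncertainty in the estimated coefficients would additionally need to be propagated into the variance of `N_hat`.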
We develop a broad geochemical dataset from 50 samples of Pleistocene glacial till matrix (including three lacustrine samples) representing four sediment provenances collected from across Minnesota, USA. Such a dataset is useful both in the context of regional studies of glaciation, and in efforts to understand how provenance and glacial processes impact the geochemistry of sediment archives. The principal sediment sources of the four provenances include materials from the Archean-Proterozoic Canadian Shield, the Proterozoic Midcontinent Rift, Paleozoic carbonates, and Cenozoic and Mesozoic clastic sediments. We measured major element geochemistry in all till samples in both <2 mm and <63 μm size fractions, trace element geochemistry in most samples, radiogenic isotopic compositions (Sr, Nd) in 13 representative samples, and Hf-isotope composition in 6 samples. Differences in source rock composition explain the primary variations in the geochemistry of our samples. In untreated (carbonate-bearing) samples, Na2O + K2O versus Fe2O3 or CaO are distinct in tills sourced from crystalline rocks versus sedimentary basins, and mechanical mixing from different source areas is evident on 100-km glacial transport length scales. Glacial materials originating from sedimentary rocks have a higher chemical index of alteration (CIA) relative to materials sourced from the predominately igneous and metamorphic Canadian Shield. Increased values of Rb + Sr, Zr/Sc, and Cr + Ni are also associated with tills derived from crystalline rocks relative to tills derived from sedimentary rocks. Both the Hf- and Nd-isotopic compositions of glacial sediments distinguish crystalline rock sources (less radiogenic) from sedimentary rock sources (more radiogenic).
The Sr- and, to a lesser degree, Hf-isotopic compositions of lacustrine samples are influenced by subtle changes in sample mineral composition, reflecting both source rock variability and the sorting of clay and heavy mineral components during sediment transport. A carbonate-free composite of our till samples was found to be broadly representative of a Canadian Shield source. The ability to discern provenance and sediment transport process controls on glacial sediment geochemistry presents an opportunity to extend our understanding of past ice-sheet dynamics, and validates approaches that use tills as a proxy for continental crustal composition, provided that the influences of sediment recycling are carefully considered.
•Continental glacial till matrix geochemistry retains signatures of source rocks >400 km up ice.
•Provenance exerts a primary influence on the chemical index of alteration (CIA) of tills.
•Sample Zr/Sc and Hf-isotope composition identify lacustrine deposition.
•Glacial till Sr-isotope composition is influenced by source rock weathering.
•Multiproxy geochemical analysis identifies tills generated from crystalline versus sedimentary rock sources.
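The chemical index of alteration (CIA) referred to above follows the standard molar-ratio definition of Nesbitt and Young (1982), CIA = 100 × Al2O3 / (Al2O3 + CaO* + Na2O + K2O), where CaO* is the silicate-fraction CaO only. A minimal sketch, using illustrative oxide values rather than measurements from this dataset:

```python
# Molar masses (g/mol) of the oxides entering the CIA.
MOLAR_MASS = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def cia(wt_pct):
    """CIA from oxide weight percents; wt_pct["CaO"] must already be
    corrected to its silicate-only value (CaO*)."""
    m = {ox: wt_pct[ox] / MOLAR_MASS[ox] for ox in MOLAR_MASS}
    return 100.0 * m["Al2O3"] / (m["Al2O3"] + m["CaO"] + m["Na2O"] + m["K2O"])

# Illustrative (not measured) values for a shield-like till composition.
example = cia({"Al2O3": 13.0, "CaO": 2.0, "Na2O": 2.5, "K2O": 2.3})
```

Unweathered igneous rocks plot near CIA ≈ 50, while intense weathering drives CIA toward 100, which is why sedimentary-sourced tills show the elevated values noted above.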
Link‐tracing sampling designs can be used to study human populations that contain “hidden” groups who tend to be linked together by a common social trait. These links can be used to increase the sampling intensity of a hidden domain by tracing links from individuals selected in an initial wave of sampling to additional domain members. Chow and Thompson (2003, Survey Methodology 29, 197–205) derived a Bayesian model to estimate the size or proportion of individuals in the hidden population for certain link‐tracing designs. We propose an addition to their model that allows for the modeling of a quantitative response. We assess properties of our model using a constructed population and a real population of at‐risk individuals, both of which contain two domains of hidden and nonhidden individuals. Our results show that our model can produce good point and interval estimates of the population mean and domain means when our population assumptions are satisfied.
This paper compares methods for modeling the probability of removal when variable amounts of removal effort are present. A hierarchical modeling framework can produce estimates of animal abundance and detection from replicated removal counts taken at different locations in a region of interest. A common method of specifying variation in detection probabilities across locations or replicates is a logistic model that incorporates relevant detection covariates. As an alternative to this logistic model, we propose using a catch-effort (CE) model to account for heterogeneity in detection when a measure of removal effort is available for each removal count. This method models the probability of detection as a nonlinear function of removal effort and a removal probability parameter that can vary spatially. Simulation results demonstrate that the CE model can effectively estimate abundance and removal probabilities when average removal rates are large, but both the CE and logistic models tend to produce biased estimates as average removal rates decrease. We also found that the CE model fit better than logistic models when estimating wild turkey abundance using harvest and hunter counts collected by the Minnesota Department of Natural Resources during the spring turkey hunting season.
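A minimal sketch of the two detection models being compared, with made-up effort values and parameters: one common CE form makes detection a nonlinear function of effort via a per-unit-effort removal probability, while the logistic form treats effort as a covariate on the logit scale.

```python
import numpy as np

# Hypothetical removal effort (e.g., hunter counts) at four sites.
effort = np.array([10.0, 25.0, 50.0, 100.0])

# Catch-effort (CE) form: each unit of effort removes an animal with
# probability theta, so detection saturates toward 1 as effort grows.
theta = 0.02  # illustrative value, not a fitted estimate
p_ce = 1.0 - (1.0 - theta) ** effort

# Logistic alternative: effort enters linearly on the logit scale
# (illustrative coefficients, not fitted values).
beta0, beta1 = -2.0, 0.03
p_logit = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * effort)))
```

The key structural difference is that `p_ce` is forced through 0 at zero effort and approaches 1 as effort grows, whereas the logistic curve's intercept and slope are unconstrained, which is one reason the two forms can fit effort-dependent removal data differently.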
The objective of this 2-year, double-blind, placebo-controlled, randomized trial involving 48 participants was to determine if biweekly miconazole powder prevents onychomycosis recurrence. Intent-to-treat analysis found no significant differences in mycologic, clinical, or complete onychomycosis reinfection rates or time to reinfection. Limitations include small sample size and dosing regimen.
Summary
Before treating onychomycosis, it is important to exclude other conditions such as lichen planus and psoriasis. The purpose of this study was to evaluate physician preferences and uses of diagnostic tests for toenail onychomycosis (TO) by surveying dermatologists (D), podiatrists (P) and family practitioners (FP) in the United States. Surveys were mailed to approximately 1000 randomly sampled physicians from each of the three specialities. The questionnaire consisted of 15 items regarding physician and practice characteristics, number of patients with TO seen and treated, tests used to diagnose TO and reasons for using the tests. Results were analysed using several statistical methods. Response rates were low (D: 33.7%; P: 16.6%; FP: 28.4%). Ds and Ps (75.2%) and FPs (43.4%) reported feeling ‘very confident’ at diagnosing onychomycosis. Potassium hydroxide (KOH) examination was the preferred diagnostic test for all three specialities. More Ds (75.4%) felt ‘very confident’ interpreting KOH exams than Ps (24.9%) and FPs (18.5%). Use of KOH exams was statistically associated with confidence interpreting exams (D & P: P = 0.04092; D & FP: P < 0.0001). Some FPs (46.6%) and Ps (21.6%) did not obtain a confirmatory diagnostic test prior to the treatment of onychomycosis, while 63.6% of Ds ‘almost always/always’ did. While limited by the low response rate, this study provides pilot information on the diagnostic preferences for TO of American Ds, Ps and FPs.
Team-based learning (TBL) is a pedagogical strategy that uses groups of students working together in teams to learn course material. The main learning objective in TBL is to give students the opportunity to practice course concepts during class time. A key feature is multiple-choice quizzes that students take individually and then re-take as a team. TBL was originally conceived by Larry Michaelsen (University of Central Missouri) for his business classes and has proven to be especially effective in training medical students. In this paper, we describe an adaptation of TBL for an undergraduate statistical literacy course.