New estimates of measurement and sampling uncertainties of gridded in situ sea surface temperature anomalies are calculated for 1850 to 2006. The measurement uncertainties account for correlations between errors in observations made by the same ship or buoy due, for example, to miscalibration of the thermometer. Correlations between the errors increase the estimated uncertainties on grid box averages. In grid boxes where there are many observations from only a few ships or drifting buoys, this increase can be large. The correlations also increase uncertainties of regional, hemispheric, and global averages above and beyond the increase arising solely from the inflation of the grid box uncertainties. This is due to correlations in the errors between grid boxes visited by the same ship or drifting buoy. At times when reliable estimates can be made, the uncertainties in global average, Southern Hemisphere, and tropical sea surface temperature anomalies are between 2 and 3 times as large as when calculated assuming the errors are uncorrelated. Uncertainties of Northern Hemisphere averages are approximately double. A new estimate is also made of sampling uncertainties. They are largest in regions of high sea surface temperature variability such as the western boundary currents and along the northern boundary of the Southern Ocean. The sampling uncertainties are generally smaller in the tropics and in the ocean gyres.
Key Points
Errors in SST measurements are correlated
Errors in SST measurements have previously been underestimated
New error estimates for SST data have been made that account for these correlations
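The inflation of grid-box uncertainty by correlated errors can be illustrated with the standard formula for the standard error of a mean of equally correlated observations; the function name and the numbers below are illustrative, not taken from the paper.

```python
import math

def gridbox_standard_error(sigma, n, rho):
    """Standard error of an n-observation grid-box average when the
    per-observation errors (standard deviation sigma) share a pairwise
    correlation rho, e.g. repeated reports from one miscalibrated ship.
    rho = 0 recovers the familiar sigma / sqrt(n)."""
    return math.sqrt((sigma ** 2 / n) * (1.0 + (n - 1) * rho))

# 100 observations, each with an illustrative 1 degC individual error:
print(gridbox_standard_error(1.0, 100, 0.0))  # uncorrelated errors
print(gridbox_standard_error(1.0, 100, 0.5))  # few ships, correlated errors
```

With rho = 0.5 the standard error is roughly seven times the uncorrelated value, which is why grid boxes sampled by only a few platforms carry much larger uncertainties than the observation count alone would suggest.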
A new flexible gridded dataset of sea surface temperature (SST) since 1850 is presented and its uncertainties are quantified. This analysis, the Second Hadley Centre Sea Surface Temperature dataset (HadSST2), is based on data contained within the recently created International Comprehensive Ocean–Atmosphere Data Set (ICOADS) database and so is superior in geographical coverage to previous datasets and has smaller uncertainties. Issues arising when analyzing a database of observations measured from very different platforms and drawn from many different countries with different measurement practices are introduced. Improved bias corrections are applied to the data to account for changes in measurement conditions through time. A detailed analysis of uncertainties in these corrections is included by exploring assumptions made in their construction and producing multiple versions using a Monte Carlo method. An assessment of total uncertainty in each gridded average is obtained by combining these bias-correction-related uncertainties with those arising from measurement errors and undersampling of intragrid box variability. These are calculated by partitioning the variance in grid box averages between real and spurious variability. From month to month in individual grid boxes, sampling uncertainties tend to be most important (except in certain regions), but on large-scale averages bias-correction uncertainties dominate owing to their correlation between grid boxes. Changes in large-scale SST through time are assessed by two methods. The linear warming between 1850 and 2004 was 0.52° ± 0.19°C (95% confidence interval) for the globe, 0.59° ± 0.20°C for the Northern Hemisphere, and 0.46° ± 0.29°C for the Southern Hemisphere. Decadally filtered differences for these regions over this period were 0.67° ± 0.04°C, 0.71° ± 0.06°C, and 0.64° ± 0.07°C.
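The combination of uncertainty components into a total for each gridded average can be sketched as a quadrature sum, assuming the components are independent. This is a simplification: the analysis described above additionally tracks the correlation of bias-correction errors between grid boxes, which is what makes them dominate large-scale averages.

```python
import math

def total_gridbox_uncertainty(u_bias, u_measurement, u_sampling):
    """Combine independent uncertainty components in quadrature.
    Illustrative only: in the full analysis the bias-correction term
    is correlated across grid boxes rather than independent."""
    return math.sqrt(u_bias ** 2 + u_measurement ** 2 + u_sampling ** 2)

# Illustrative component values (degC) for a single grid-box average:
print(total_gridbox_uncertainty(0.1, 0.2, 0.3))
```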
One of the largest sources of uncertainty in estimates of global temperature change is that associated with the correction of systematic errors in sea surface temperature (SST) measurements. Despite recent work to quantify and reduce these errors throughout the historical record, differences between analyses remain larger than can be explained by the estimated uncertainties. We revisited the method used to estimate systematic errors and their uncertainties in version 3 of the Met Office Hadley Centre SST data set, HadSST. Using comparisons with oceanographic temperature profiles, we make estimates of biases associated with engine room measurements and insulated buckets and constrain the ranges of two of the more uncertain parameters in the bias estimation: the timing of the transition from uninsulated to insulated buckets in the middle twentieth century and the estimated fractions of different measurement methods used. Here, we present HadSST.4.0.0.0, based on release 3.0.0 and 3.0.1 of the International Comprehensive Ocean‐Atmosphere Data Set supplemented by drifting buoy measurements from the Copernicus Marine Environmental Monitoring Service. HadSST.4.0.0.0 comprises a 200‐member “ensemble” in which uncertain parameters in the SST bias scheme are varied to generate a range of adjustments. The evolution of global average SST in the new data set is similar to that in other SST data sets, and the difference between data sets is reduced during the middle twentieth century. However, the changes also highlight a discrepancy in the global‐average difference between adjusted SST and marine air temperature in the early 1990s and hence between HadSST.4.0.0.0 and the National Oceanic and Atmospheric Administration SST data set, ERSSTv5.
Key Points
We describe the construction of HadSST.4.0.0.0, a climate data set of sea surface temperature change from 1850 to 2018
A range of bias adjustments was generated to create an ensemble of SST data sets with the ensemble spread partly constrained by oceanographic profile measurements
New estimates reduce discrepancies between data sets during the middle twentieth century and the recent slowdown in warming, but highlight a divergence in the early 1990s
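The ensemble approach can be sketched as a Monte Carlo draw over uncertain bias-scheme parameters, with the spread across members quantifying the parametric uncertainty of the adjustments. The parameter names, ranges, and the toy adjustment model below are hypothetical stand-ins, not the scheme actually used in HadSST.4.0.0.0.

```python
import random

random.seed(42)

def draw_adjustment():
    """Hypothetical: draw uncertain bias-scheme parameters from assumed
    plausible ranges and return a single bias adjustment (degC)."""
    bucket_fraction = random.uniform(0.5, 0.9)  # assumed range, not the paper's
    bucket_bias = random.uniform(-0.4, -0.1)    # assumed cooling bias range
    return bucket_fraction * bucket_bias

# A 200-member ensemble of adjustments, mirroring the data set's design:
ensemble = [draw_adjustment() for _ in range(200)]
mean_adj = sum(ensemble) / len(ensemble)
spread = (sum((a - mean_adj) ** 2 for a in ensemble) / len(ensemble)) ** 0.5
print(mean_adj, spread)
```

Each member corresponds to one internally consistent set of adjustments, so users can propagate the adjustment uncertainty by repeating an analysis across all 200 members rather than using a single best estimate.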
Changes in instrumentation and data availability have caused time‐varying biases in estimates of global and regional average sea surface temperature. The sizes of the biases arising from these changes are estimated and their uncertainties evaluated. The estimated biases and their associated uncertainties are largest during the period immediately following the Second World War, reflecting the rapid and incompletely documented changes in shipping and data availability at the time. Adjustments have been applied to reduce these effects in gridded data sets of sea surface temperature and the results are presented as a set of interchangeable realizations. Uncertainties of estimated trends in global and regional average sea surface temperature due to bias adjustments since the Second World War are found to be larger than uncertainties arising from the choice of analysis technique, indicating that this is an important source of uncertainty in analyses of historical sea surface temperatures. Despite this, trends over the twentieth century remain qualitatively consistent.
Key Points
There are biases in SST measurements throughout the record
These biases have been adjusted for, but large uncertainties remain
A new SST data set, HadSST3, is presented
We present a new version of the Met Office Hadley Centre/Climatic Research Unit global surface temperature data set, HadCRUT5. HadCRUT5 presents monthly average near‐surface temperature anomalies, relative to the 1961–1990 period, on a regular 5° latitude by 5° longitude grid from 1850 to 2018. HadCRUT5 is a combination of sea‐surface temperature (SST) measurements over the ocean from ships and buoys and near‐surface air temperature measurements from weather stations over the land surface. These data have been sourced from updated compilations and the adjustments applied to mitigate the impact of changes in SST measurement methods have been revised. Two variants of HadCRUT5 have been produced for use in different applications. The first represents temperature anomaly data on a grid for locations where measurement data are available. The second, more spatially complete, variant uses a Gaussian process based statistical method to make better use of the available observations, extending temperature anomaly estimates into regions for which the underlying measurements are informative. Each is provided as a 200‐member ensemble accompanied by additional uncertainty information. The combination of revised input data sets and statistical analysis results in greater warming of the global average over the course of the whole record. In recent years, increased warming results from an improved representation of Arctic warming and a better understanding of evolving biases in SST measurements from ships. These updates result in greater consistency with other independent global surface temperature data sets, despite their different approaches to data set construction, and further increase confidence in our understanding of changes seen.
Plain Language Summary
We have produced a new version of a data set that measures changes of near‐surface temperature across the globe from 1850 to 2018, called HadCRUT5. We have included an improved data set of sea‐surface temperature, which better accounts for the effects of changes through time in how measurements were made from ships and buoys at sea. We have also included an expanded compilation of measurements made at weather stations on land. There are two variations of HadCRUT5, produced for different uses. The first, the “HadCRUT5 noninfilled data set,” maps temperature changes on a grid for locations close to where we have measurements. The second, the “HadCRUT5 analysis,” extends our estimates to locations further from the available measurements using a statistical technique that makes use of the spatial connectedness of temperature patterns. This improves the representation of less well observed regions in estimates of global, hemispheric and regional temperature change. Together, these updates and improvements reveal a slightly greater rise in near‐surface temperature since the nineteenth century, especially in the Northern Hemisphere, which is more consistent with other data sets. This increases our confidence in our understanding of global surface temperature changes since the mid‐19th century.
Key Points
We have created a new version of the Met Office Hadley Centre and Climatic Research Unit global surface temperature data set for 1850–2018
The new data set better represents sparsely observed regions of the globe and incorporates an improved sea‐surface temperature data set
This data set shows increased global average warming since the mid‐19th century and in recent years, consistent with other analyses
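The statistical infilling can be sketched as Gaussian-process regression on anomalies, predicting unobserved locations from the spatial covariance of observed ones. The squared-exponential kernel, 1-D locations, length scale, and noise level below are illustrative choices, not the HadCRUT5 analysis configuration.

```python
import numpy as np

def gp_infill(x_obs, y_obs, x_new, length_scale=15.0, noise=0.1):
    """Minimal Gaussian-process (kriging) sketch: predict anomalies at
    unobserved locations x_new from observed (x_obs, y_obs), using a
    squared-exponential spatial covariance. Illustrative parameters."""
    def k(a, b):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)
    K = k(x_obs, x_obs) + noise ** 2 * np.eye(len(x_obs))
    return k(x_new, x_obs) @ np.linalg.solve(K, y_obs)

# Anomalies observed at four 1-D locations, with a gap around x = 30:
x_obs = np.array([0.0, 10.0, 20.0, 40.0])
y_obs = np.array([0.5, 0.6, 0.4, 0.2])
print(gp_infill(x_obs, y_obs, np.array([30.0])))
```

The prediction at the gap is a covariance-weighted blend of nearby observations, which is how the analysis extends estimates only into regions where the measurements are actually informative.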
Observational estimates of global ocean heat content (OHC) change are used to assess Earth's energy imbalance over the 20th Century. However, intercomparison studies show that the mapping methods used to interpolate sparse ocean temperature profile data are a key source of uncertainty. We present a new approach to assessing OHC mapping methods using 'synthetic profiles' generated from a state-of-the-art global climate model simulation. Synthetic profiles have the same sampling characteristics as the historical ocean temperature profile data but are based on model simulation data. Mapping methods ingest these data in the same way as they would real observations, but the resultant mapped fields can be compared to a model simulation 'truth'. We use this approach to assess two mapping methods that are used routinely for climate monitoring and initialisation of decadal forecasts. The introduction of the Argo network of autonomous profiling floats during the 2000s drives clear improvements in the ability of these methods to reconstruct the variability and spatial structure of OHC changes. At depths below 2000 m, both methods underestimate the magnitude of the simulated ocean warming signal. Temporal variability and trends in OHC are better captured in the better-observed northern hemisphere than in the southern hemisphere. At all depths, the sampling characteristics of the historical data introduce some spurious variability in the estimates of global OHC on sub-annual to multi-annual timescales. However, many of the large scale spatial anomalies, especially in the upper ocean, are successfully reconstructed even with sparse observations from the 1960s, demonstrating the potential to construct historical ocean analyses for assessing decadal predictions. The value of using accurate global covariances for data-poor periods is clearly seen.
The results of this 'proof-of-concept' study are encouraging for gaining further insights into the capabilities and limitations of different mapping methods and for quantifying uncertainty in global OHC estimates.
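The synthetic-profile idea can be sketched in a few lines: sample a model 'truth' field only where historical observations exist, reconstruct the field from those samples, and score the reconstruction against the full model field. The 1-D field and the deliberately crude "mapping method" (filling gaps with the sample mean) are purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Model "truth": an anomaly field along an illustrative 1-D ocean section.
truth = 0.5 * np.sin(np.linspace(0, 3 * np.pi, 200))

# Sparse sampling mask, mimicking pre-Argo historical profile coverage:
sampled = rng.random(200) < 0.1
synthetic_profiles = truth[sampled]  # what a mapping method would ingest

# A trivial stand-in mapping method: fill all gaps with the sample mean.
reconstruction = np.full_like(truth, synthetic_profiles.mean())

# Because the full truth is known, the error is attributable to the
# mapping method and sampling, not to measurement uncertainty.
rmse = np.sqrt(np.mean((reconstruction - truth) ** 2))
print(rmse)
```

Real mapping methods are far more sophisticated, but the scoring logic is the same: the known model truth isolates how much error the interpolation itself introduces under historical sampling.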
Diabet. Med. 29, 604–608 (2012)
Aims Postprandial glucagon‐like peptide‐1 (GLP‐1) secretion and the ‘incretin effect’ have been reported to be deficient in Type 2 diabetes, but most studies have not controlled for variations in the rate of gastric emptying. We evaluated blood glucose, and plasma insulin, GLP‐1 and glucose‐dependent insulinotropic polypeptide (GIP) responses to intraduodenal glucose in Type 2 diabetes, and compared these with data from healthy controls.
Methods Eight males with well‐controlled Type 2 diabetes, managed by diet alone, were studied on four occasions in single‐blind, randomized order. Blood glucose, and plasma insulin, GLP‐1, and GIP were measured during 120‐min intraduodenal glucose infusions at 1 kcal/min (G1), 2 kcal/min (G2) and 4 kcal/min (G4) or saline control.
Results Type 2 patients had higher basal (P < 0.0005) and incremental (P < 0.0005) blood glucose responses to G2 and G4, when compared with healthy controls. In both groups, the stimulation of insulin and GLP‐1 by increasing glucose loads was not linear; responses to G1 and G2 were minimal, whereas responses to G4 were much greater (P < 0.005 for each) (incremental area under the GLP‐1 curve 224 ± 65, 756 ± 331 and 2807 ± 473 pmol/l.min, respectively, in Type 2 patients and 373 ± 231, 505 ± 161 and 1742 ± 456 pmol/l.min, respectively, in healthy controls). The GLP‐1 responses appeared comparable in the two groups. In both groups there was a load‐dependent increase in plasma GIP with no difference between them.
Conclusions In patients with well‐controlled Type 2 diabetes, blood glucose, insulin and GLP‐1 responses are critically dependent on the small intestinal glucose load, and GLP‐1 responses are not deficient.
Abstract
Seminal to the process of a health sciences curriculum evaluation is the periodic review of clinical assessment instruments that measure competency. An assessment of quality is facilitated by using a well-structured, authentic and reliable instrument. This process rests on designing and measuring the instrument against a sound framework and validating it for scientific merit. This paper documents the pedagogy and the process taken in developing an improved formative competency-based assessment instrument for the final year students of the Bachelor of Oral Health program (BOH) at the University of the Western Cape (UWC).
Methods: A qualitative research study design employing the Nominal Group Technique (NGT) was used as a method for gaining small group consensus on the clinical assessment instrument for exit level Oral Hygiene (BOH3) students within the parameters of assessment principles. The key contributors to the instrument development process were the academic staff of the Department of Oral Hygiene, involved in clinical teaching and assessment of student competency.
Results: The domains of ethics and professionalism, patient assessment, diagnosis, treatment planning and implementation were identified as the core elements in the assessment. The principles of assessment, which include alignment with outcomes, feedback, transparency and validity, were used to guide the instrument development. The assessment criteria were cross-examined for alignment to the learning outcomes of the module and the program, whilst formative feedback was foregrounded as a central feature to support student learning and progress monitoring. Transparency was achieved by giving students access to the instrument before and after the assessment, including the written feedback on their performance. The instrument embodied a range of criteria to be assessed rather than the awarding of a cumulative score, allowing identification of the criteria or domains in which a student is struggling or excelling. Consensus on the instrument design was achieved using the NGT phases throughout the development process, including the weighting of the domains and grading. This level of engagement, together with the application of scientifically sound assessment principles, contributed to the validation of the instrument.
Conclusion: The development of a competency-based assessment instrument was the result of a structured, collaborative and scientifically engaged process framed around specific assessment principles. The process culminated in the development of a formative competency-based clinical assessment instrument that was fit for purpose in the Bachelor of Oral Health program.
The Nominal Group Technique proved to be a valuable approach for reaching small-group consensus in developing the instrument. It served to promote individual perspectives, to generate debate and group discussion among academics proficient in clinical teaching, and finally to facilitate group consensus on the instrument's structure and system of administration.
A probability distribution for values of the effective climate sensitivity, with a lower bound of 1.6 K (5th percentile), is obtained on the basis of the increase in ocean heat content in recent decades from analyses of observed interior-ocean temperature changes, surface temperature changes measured since 1860, and estimates of anthropogenic and natural radiative forcing of the climate system. Radiative forcing is the greatest source of uncertainty in the calculation; the result also depends somewhat on the rate of ocean heat uptake in the late nineteenth century, for which an assumption is needed as there is no observational estimate. Because the method does not use the climate sensitivity simulated by a general circulation model, it provides an independent observationally based constraint on this important parameter of the climate system.
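The energy-budget constraint behind this kind of estimate can be written as a one-line calculation relating warming, forcing, and ocean heat uptake. The numerical inputs below are illustrative placeholders, not the paper's values.

```python
def effective_climate_sensitivity(delta_T, delta_F, delta_Q, F_2x=3.7):
    """Energy-budget estimate of effective climate sensitivity (K per
    CO2 doubling): S = F_2x * dT / (dF - dQ), where dT is the observed
    surface warming (K), dF the radiative forcing and dQ the ocean heat
    uptake (both W m^-2). F_2x ~ 3.7 W m^-2 is the standard forcing for
    doubled CO2."""
    return F_2x * delta_T / (delta_F - delta_Q)

# Illustrative inputs only, chosen to show the calculation:
print(effective_climate_sensitivity(delta_T=0.6, delta_F=1.6, delta_Q=0.5))
```

Because dF and dQ enter as a difference in the denominator, the result is most sensitive to the forcing estimate, consistent with the abstract's statement that radiative forcing is the dominant source of uncertainty.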
Human height is a classic, highly heritable quantitative trait. To begin to identify genetic variants influencing height, we examined genome-wide association data from 4,921 individuals. Common variants in the HMGA2 oncogene, exemplified by rs1042725, were associated with height (P = 4 × 10−8). HMGA2 is also a strong biological candidate for height, as rare, severe mutations in this gene alter body size in mice and humans, so we tested rs1042725 in additional samples. We confirmed the association in 19,064 adults from four further studies (P = 3 × 10−11, overall P = 4 × 10−16, including the genome-wide association data). We also observed the association in children (P = 1 × 10−6, N = 6,827) and a tall/short case-control study (P = 4 × 10−6, N = 3,207). We estimate that rs1042725 explains ∼0.3% of population variation in height (∼0.4 cm increased adult height per C allele). There are few examples of common genetic variants reproducibly associated with human quantitative traits; these results represent, to our knowledge, the first consistently replicated association with adult and childhood height.
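The quoted fraction of height variance can be checked against the standard additive-model formula for a biallelic SNP. The effect size (~0.4 cm per C allele) is from the abstract, but the allele frequency and the height standard deviation below are assumptions for illustration; the calculation gives the right order of magnitude (a fraction of a percent) rather than the exact reported 0.3%.

```python
def snp_variance_explained(freq, beta, trait_sd):
    """Fraction of trait variance explained by a biallelic SNP under an
    additive model: 2p(1-p) * beta^2 / Var(trait), with p the allele
    frequency and beta the per-allele effect."""
    return 2 * freq * (1 - freq) * beta ** 2 / trait_sd ** 2

# beta ~0.4 cm per allele (abstract); p = 0.5 and a height SD of ~7 cm
# are assumed here, not taken from the paper.
print(snp_variance_explained(0.5, 0.4, 7.0))
```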