Valid and reliable instruments are needed to measure the multiple dimensions of perceived risk. The Perceived Risk of HIV Scale is an 8-item measure that assesses how people think and feel about their risk of infection. We set out to perform a cross-cultural adaptation of the scale to Brazilian Portuguese among key populations (gay, bisexual and other men who have sex with men and transgender/non-binary people) and other populations (cisgender heterosexual men and cisgender women).
A methodological study with a cross-sectional design was conducted online in October 2019 (key populations sample 1 and other populations) and February-March 2020 (key populations not on pre-exposure prophylaxis, sample 2). Cross-cultural adaptation of the Perceived Risk of HIV Scale followed the Beaton et al. (2000) guidelines and included confirmatory factor analysis, assessment of differential item functioning (DIF) using the Multiple-Indicator Multiple-Cause (MIMIC) model, and concurrent validity testing to verify whether younger individuals, those who had ever tested for HIV, and those engaging in high-risk behaviors had higher scores on the scale.
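The abstract does not name the software used; as one way to picture the analysis, the sketch below fits a MIMIC model with the Python package semopy, assuming hypothetical columns item1-item8 for the scale items and a binary key_population indicator (1 = key populations, 0 = other populations).

```python
from semopy import Model, calc_stats

# df: one row per respondent, columns item1..item8 plus key_population (0/1)
desc = """
# single latent factor measured by the eight scale items
risk =~ item1 + item2 + item3 + item4 + item5 + item6 + item7 + item8
# MIMIC part: latent factor regressed on the group indicator
risk ~ key_population
# candidate DIF: a direct path from group to one item, over and above the factor
item2 ~ key_population
"""
model = Model(desc)
model.fit(df)
print(calc_stats(model)[["CFI", "TLI", "RMSEA"]])  # fit indices as reported below
```

A statistically significant direct path (here, from key_population to item2) flags DIF; its size relative to the total score indicates whether the DIF is practically negligible.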
A total of 4342 participants from key populations (sample 1 = 235; sample 2 = 4107) and 155 participants from other populations completed the measure. We confirmed the single-factor structure of the original measure (fit indices for sample 1 plus other populations: CFI = 0.98, TLI = 0.98, RMSEA = 0.07; for sample 2 plus other populations: CFI = 0.97, TLI = 0.95, RMSEA = 0.09). In the comparisons between key populations and other populations, three items (item 2: "I worry about getting infected with HIV"; item 4: "I am sure I will not get infected with HIV"; and item 8: "Getting HIV is something I have") exhibited statistically significant DIF. Items 2 and 8 were endorsed at higher levels by key populations, and item 4 by other populations. However, the effect of DIF on overall scores was negligible (0.10 and 0.02 standard deviations for the models with other populations plus samples 1 and 2, respectively). Those who had ever tested for HIV scored higher than those who had never tested (p < .001); among key populations, those engaging in high-risk behaviors scored higher than those reporting low-risk behaviors.
The Perceived Risk of HIV Scale can be used among key populations and other populations from Brazil.
Postpolypectomy colonoscopy surveillance aims to prevent colorectal cancer (CRC). The 2002 UK surveillance guidelines define low-risk, intermediate-risk and high-risk groups, recommending different strategies for each. Evidence supporting the guidelines is limited. We examined CRC incidence and effects of surveillance on incidence among each risk group.
Retrospective study of 33 011 patients who underwent colonoscopy with adenoma removal at 17 UK hospitals, mostly (87%) from 2000 to 2010. Patients were followed up through 2016. Cox regression with time-varying covariates was used to estimate effects of surveillance on CRC incidence adjusted for patient, procedural and polyp characteristics. Standardised incidence ratios (SIRs) compared incidence with that in the general population.
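As a hedged illustration of the time-varying analysis (the paper does not specify its software), lifelines' CoxTimeVaryingFitter fits a Cox model on long-format data in which surveillance status can change between intervals; the column names below are hypothetical stand-ins for the study's variables.

```python
from lifelines import CoxTimeVaryingFitter

# long_df: one row per follow-up interval per patient, so that the
# surveillance covariate can switch on after each surveillance visit
ctv = CoxTimeVaryingFitter()
ctv.fit(
    long_df,
    id_col="patient_id",
    event_col="crc",          # 1 if CRC was diagnosed in this interval
    start_col="start",
    stop_col="stop",
    formula="surveillance + age + sex + polyp_size + polyp_count",
)
ctv.print_summary()  # hazard ratios with 95% CIs for each covariate
```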
After exclusions, 28 972 patients were available for analysis; 14 401 (50%) were classed as low-risk, 11 852 (41%) as intermediate-risk and 2719 (9%) as high-risk. Median follow-up was 9.3 years. In the low-risk, intermediate-risk and high-risk groups, CRC incidence per 100 000 person-years was 140 (95% CI 122 to 162), 221 (195 to 251) and 366 (295 to 453), respectively. CRC incidence was 40%-50% lower with a single surveillance visit than with none: hazard ratios (HRs) were 0.56 (95% CI 0.39 to 0.80), 0.59 (0.43 to 0.81) and 0.49 (0.29 to 0.82) in the low-risk, intermediate-risk and high-risk groups, respectively. Compared with the general population, CRC incidence without surveillance was similar among low-risk (SIR 0.86, 95% CI 0.73 to 1.02) and intermediate-risk (1.16, 0.97 to 1.37) patients, but higher among high-risk patients (1.91, 1.39 to 2.56).
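The SIRs compare observed CRC cases with the number expected if general-population rates applied to the cohort's person-years; a minimal sketch of that arithmetic (stratum labels and rates are placeholders):

```python
# SIR = observed / expected, with expected cases obtained by applying
# age-sex-specific population incidence rates to the cohort's person-years
def standardised_incidence_ratio(observed, person_years, pop_rates):
    expected = sum(person_years[s] * pop_rates[s] for s in person_years)
    return observed / expected

# e.g. an SIR of 0.86 means cohort incidence is 14% below the general population,
# while 1.91 means roughly twice the general-population incidence
```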
Postpolypectomy surveillance reduces CRC risk. However, even without surveillance, CRC risk in some low-risk and intermediate-risk patients is no higher than in the general population. These patients could be managed by screening rather than surveillance.
In recent years, the healthcare sector has adopted the use of operational risk assessment tools to help understand the systems issues that lead to patient safety incidents. But although these problem-focused tools have improved the ability of healthcare organizations to identify hazards, they have not translated into measurable improvements in patient safety. One possible reason for this is a lack of support for the solution-focused process of risk control. This article describes a content analysis of the risk management strategies, policies, and procedures at all acute (i.e., hospital), mental health, and ambulance trusts (health service organizations) in the East of England area of the British National Health Service. The primary goal was to determine what organizational-level guidance exists to support risk control practice. A secondary goal was to examine the risk evaluation guidance provided by these trusts. With regard to risk control, we found an almost complete lack of useful guidance to promote good practice. With regard to risk evaluation, the trusts relied exclusively on risk matrices. A number of weaknesses were found in the use of this tool, especially in the guidance for scoring an event's likelihood. We make a number of recommendations to address these concerns. The guidance assessed provides insufficient support for risk control and risk evaluation. This may present a significant barrier to the success of risk management approaches in improving patient safety.
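For readers unfamiliar with the tool the trusts relied on, a risk matrix scores an event by multiplying ordinal likelihood and consequence ratings; the 5x5 layout and banding thresholds below are a common convention, not the trusts' actual scheme.

```python
# a conventional 5x5 risk matrix: overall score = likelihood x consequence
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
CONSEQUENCE = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "catastrophic": 5}

def risk_score(likelihood: str, consequence: str) -> int:
    return LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]

def risk_band(score: int) -> str:
    # illustrative bands; real policies must define these cut-offs explicitly
    if score >= 15:
        return "high"
    if score >= 8:
        return "moderate"
    return "low"
```

The article's criticism is precisely that the likelihood ratings feeding this multiplication were poorly defined in the trusts' guidance.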
The ecological and health problems resulting from heavy metals (Mn, Fe, Ni, Co, Cu, Zn, Cd, Hg, Pb, As, and Cr) in the road dust of the towns of Sekota and Lalibela, Ethiopia, were assessed. The average heavy metal concentrations ranged from 0.088 (Cd) to 2.714 (Fe) mg/kg. Individual and cumulative metal pollution levels in the two towns revealed that Lalibela is moderately polluted by Zn, Pb, and Ni, and Sekota is moderately polluted by Zn, Pb, Ni, As, Hg, and Cu. Furthermore, the United States Environmental Protection Agency's health risk evaluation model showed that the total heavy metal health risk levels in the road dust ranged from 5.71 × 10⁻³ (adults) to 2.57 × 10⁻² (children), with an average risk of 7.35 × 10⁻². Lalibela was found to have a higher risk than Sekota. The total lifetime cancer risk varied from 4.51 × 10⁻⁹ (adults, Sekota) to 7.75 × 10⁻⁹ (children, Lalibela), with a mean risk of 6.12 × 10⁻⁹, implying a low chance of developing cancer. The hazard quotient and hazard index of all the metals were below the safety limit. In general, children were found to be more susceptible than adults.
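The USEPA model referenced here combines exposure-dose equations with toxicity reference values; a minimal sketch of the standard ingestion-pathway calculations (function and parameter names are generic, and no site-specific values are implied):

```python
# USEPA-style health risk screening for a metal in road dust (ingestion pathway)
def average_daily_dose(c, ing_rate, ef, ed, bw, at):
    """ADD (mg/kg/day) = C * IngR * EF * ED / (BW * AT)."""
    return c * ing_rate * ef * ed / (bw * at)

def hazard_quotient(add, rfd):
    return add / rfd            # non-carcinogenic; HQ < 1 means below concern

def hazard_index(hq_values):
    return sum(hq_values)       # HI aggregates HQs across all metals

def lifetime_cancer_risk(ladd, slope_factor):
    return ladd * slope_factor  # carcinogenic; ~1e-9 here implies very low risk
```

Children's higher intake rate relative to body weight is why the same dust concentrations yield higher doses, matching the finding that children are more susceptible than adults.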
Public trust in the authorities has been recognised in risk research as a crucial component of effective and efficient risk management. But in a pandemic, where the primary responsibility for risk management is not centralised within institutional actors but diffused across society, trust can become a double-edged sword. Under these conditions, public trust based on a perception of government competence, care and openness may in fact lead people to underestimate risks and thus reduce their belief in the need to take individual action to control those risks. In this paper, we examine the interaction between trust in government, risk perceptions and public compliance in Singapore in the period between January and April 2020. Using social media tracking and online focus group discussions, we present a preliminary assessment of public responses to government risk communication and risk management measures. We highlight the unique deployment of risk communication in Singapore based on the narrative of 'defensive pessimism' to heighten rather than lower levels of perceived risk. But the persistence of low public risk perceptions and the concomitant low levels of compliance with government risk management measures bring to light the paradox of trust. This calls for further reflection on another dimension of trust which focuses on the role of the public, and further investigation into other social and cultural factors that may have a stronger influence over individual belief in the need to take personal action to control risks.
• Rural people who inject drugs experience significant stigma in healthcare settings.
• Healthcare stigma among rural PWID contributes to high-risk injection practices.
• Expanding healthcare provision in rural SSPs may further reduce HIV risk among PWID.
The HIV epidemic is increasingly penetrating rural areas of the U.S. due to evolving epidemics of injection drug use. Many rural areas experience deficits in availability of HIV prevention, testing and harm reduction services, and confront significant stigma that inhibits care seeking. This paper examines enacted stigma in healthcare settings among rural people who inject drugs (PWID) and explores associations of stigma with continuing high-risk behaviors for HIV.
PWID participants (n = 324) were recruited into the study in three county health department syringe service programs (SSPs), as well as in local community-based organizations. Trained interviewers completed a standardized baseline interview lasting approximately 40 min. Bivariate logistic regression models examined the associations between enacted healthcare stigma, health conditions, and injection risk behaviors, and a mediation analysis was conducted.
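The mediation step can be pictured with statsmodels' Mediation class, which takes unfitted outcome and mediator models; the variable names below are hypothetical stand-ins for the study's measures.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.mediation import Mediation

# bivariate logistic regression: odds of enacted healthcare stigma by risk behavior
fit = sm.Logit.from_formula("stigma ~ shared_equipment", data=df).fit()
print(np.exp(fit.params), np.exp(fit.conf_int()))  # OR with confidence interval

# mediation: stigmatizing condition -> enacted stigma -> high-risk injection
outcome = sm.GLM.from_formula("high_risk_injection ~ stigma + condition",
                              df, family=sm.families.Binomial())
mediator = sm.GLM.from_formula("stigma ~ condition",
                               df, family=sm.families.Binomial())
result = Mediation(outcome, mediator, exposure="condition", mediator="stigma").fit()
print(result.summary())  # direct, indirect, and proportion-mediated estimates
```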
Stigmatizing health conditions were common in this sample of PWID, and 201 participants (62.0%) reported experiencing stigma from healthcare providers. Injection risk behaviors were uniformly associated with higher odds of enacted healthcare stigma, including sharing injection equipment at the most recent injection (OR = 2.76; CI 1.55, 4.91) and lifetime receptive needle sharing (OR = 2.27; CI 1.42, 3.63). Enacted healthcare stigma partially mediated the relationship between having a stigmatizing health condition and engagement in high-risk injection behaviors.
Rural PWID are vulnerable to stigma in healthcare settings, which contributes to high-risk injection behaviors for HIV. These findings have critical public health implications, including the importance of tailored interventions to decrease enacted stigma in care settings, and structural changes to expand the provision of healthcare services within SSP settings.
Adverse childhood experiences (ACEs) have been associated with poor health status later in life. The objective of the present study was to examine the relationship between ACEs and health-related behaviors, chronic diseases, and mental health in adults.
A cross-sectional study was performed with 1501 residents of Macheng, China. The ACE International Questionnaire (ACE-IQ) was used to assess ACEs, including psychological, physical, and sexual forms of abuse, as well as household dysfunction. The main outcome variables were lifetime drinking status, lifetime smoking status, chronic diseases, depression, and posttraumatic stress disorder. Multiple logistic regression models were used to examine the associations between overall ACE score and individual ACE component scores and risk behaviors/comorbidities in adulthood after controlling for potential confounders.
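One way to picture the adjusted analysis: a logistic model of each outcome on the overall ACE score plus confounders, with the AOR read off as the exponentiated coefficient (the covariate names below are hypothetical).

```python
import numpy as np
import statsmodels.formula.api as smf

# AOR per one-point increase in overall ACE score, adjusted for confounders
res = smf.logit("depression ~ ace_score + age + sex + income + education",
                data=df).fit()
aor = np.exp(res.params["ace_score"])          # cf. the reported AOR of 1.37
ci = np.exp(res.conf_int().loc["ace_score"])   # 95% CI on the odds-ratio scale
print(aor, ci.values)
```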
A total of 66.2% of participants reported at least one ACE, and 5.93% reported four or more ACEs. Increased ACE scores were associated with increased risks of drinking (adjusted odds ratio [AOR] = 1.09, 95% confidence interval [CI]: 1.00-1.09), chronic disease (AOR = 1.17, 95% CI: 1.06-1.28), depression (AOR = 1.37, 95% CI: 1.27-1.48), and posttraumatic stress disorder (AOR = 1.32, 95% CI: 1.23-1.42) in adulthood. After adjusting for confounding factors, the individual ACE components had different impacts on risk behaviors and health, particularly on poor mental health outcomes, in adulthood.
ACEs during childhood were significantly associated with risk behaviors and poor health outcomes in adulthood, and different ACE components had different long-term effects on health outcomes in adulthood.
To develop and validate a novel, machine learning-derived model to predict the risk of heart failure (HF) among patients with type 2 diabetes mellitus (T2DM).
Using data from 8,756 patients free at baseline of HF, with <10% missing data, and enrolled in the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial, we used random survival forest (RSF) methods, a nonparametric decision tree machine learning approach, to identify predictors of incident HF. The RSF model was externally validated in a cohort of individuals with T2DM using the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT).
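The RSF approach can be sketched with scikit-survival, one common Python implementation (the paper does not state its software, and the feature set below is a placeholder):

```python
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv
from sksurv.metrics import concordance_index_censored

# X_train: baseline clinical, laboratory, and ECG features per patient
y_train = Surv.from_arrays(event=train["hf"].astype(bool), time=train["years"])
rsf = RandomSurvivalForest(n_estimators=500, min_samples_leaf=15, random_state=0)
rsf.fit(X_train, y_train)

# discrimination on held-out data (the paper reports a C-index of 0.77 internally)
y_test = Surv.from_arrays(event=test["hf"].astype(bool), time=test["years"])
risk_scores = rsf.predict(X_test)
c_index = concordance_index_censored(y_test["event"], y_test["time"], risk_scores)[0]
```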
Over a median follow-up of 4.9 years, 319 patients (3.6%) developed incident HF. The RSF models demonstrated better discrimination than the best-performing Cox-based method (C-index 0.77, 95% CI 0.75-0.80, vs. 0.73, 95% CI 0.70-0.76, respectively) and had acceptable calibration (Hosmer-Lemeshow χ² = 9.63, p = 0.29) in the internal validation data set. From the identified predictors, an integer-based risk score for 5-year HF incidence was created: the WATCH-DM (Weight [BMI], Age, hyperTension, Creatinine, HDL-C, Diabetes control [fasting plasma glucose], QRS Duration, MI, and CABG) risk score. Each 1-unit increment in the risk score was associated with a 24% higher relative risk of HF within 5 years. The cumulative 5-year incidence of HF increased in a graded fashion from 1.1% in quintile 1 (WATCH-DM score ≤7) to 17.4% in quintile 5 (WATCH-DM score ≥14). In the external validation cohort, the RSF-based risk prediction model and the WATCH-DM risk score performed well, with good discrimination (C-index = 0.74 and 0.70, respectively), acceptable calibration (p ≥ 0.20 for both), and broad risk stratification (5-year HF risk ranging from 2.5% to 18.7% across quintiles 1-5).
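Given the reported per-point estimate, relative risk between two scores compounds multiplicatively; a small illustration of that arithmetic (an approximation implied by the abstract, not the published calculator):

```python
# each 1-point WATCH-DM increment carries ~24% higher relative risk of 5-year HF
def watch_dm_relative_risk(score_a: int, score_b: int, per_point: float = 1.24) -> float:
    return per_point ** (score_a - score_b)

print(watch_dm_relative_risk(14, 7))  # quintile-5 vs quintile-1 cut-offs: ~4.5x
```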
We developed and validated a novel, machine learning-derived risk score that integrates readily available clinical, laboratory, and electrocardiographic variables to predict the risk of HF among outpatients with T2DM.
This article continues a recent trend of exploring the linkages between supply chain risk, resilience and decision-making at an individual level. Specifically, the article reports the results of a behavioral study that explores whether and how perceptions of supply chain risk and resilience influence decisions regarding the selection of a new source of critical components. The study uses a full factorial design for a scenario-based role-playing experiment involving over 1,000 valid responses drawn from multiple sampling pools, including supply chain managers, students, and crowd-sourced respondents. The results indicate that the perception of supply chain resilience, whether formed through systemic resilience communication, such as training or corporate pronouncements, or through personal exposure, significantly influences decision-making, although personal exposure appears to have a stronger impact on the outcome. This relationship is significantly moderated by the risk propensity of the individual decision-makers. The article concludes with a discussion of the results and their implications for both theory and practice. The discussion provides a framework for integrating the macro and micro levels by arguing that micro-level issues can potentially moderate and mediate the relationships and findings observed at the macro level. Failure to integrate the micro effects can result in variance that macro-level analysis is unable to explain.
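To make the design concrete, a full factorial scenario experiment crosses every level of every manipulated factor; the factors and levels below are hypothetical illustrations, not the study's actual manipulations.

```python
from itertools import product

# hypothetical 2x2x2 full factorial: one scenario cell per combination of cues
factors = {
    "resilience_communication": ["present", "absent"],
    "personal_exposure": ["experienced_disruption", "none"],
    "supplier_risk_cue": ["high", "low"],
}
conditions = [dict(zip(factors, combo)) for combo in product(*factors.values())]

def assign_condition(participant_id: int) -> dict:
    # balanced rotation of respondents across the eight scenario cells
    return conditions[participant_id % len(conditions)]
```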
• We examine whether risk governance affects overall risk-taking and risk management effectiveness in conventional and Islamic banks.
• Risk governance affects the risk perspectives across both types of banks for the post-crisis period.
• Risk governance in Islamic banks relates to higher risk-taking for the pre-crisis period.
• Board-level risk committees improve the effectiveness of risk management within conventional banks but do not influence Islamic banks.
This study aims to investigate (1) the effects of the creation of a board-level risk committee (RC) and the designation of a chief risk officer (CRO) on the risk-taking practices of financial institutions and (2) whether these mechanisms improve the risk management effectiveness of both conventional banks (CBs) and Islamic banks (IBs). We contribute to the scarce literature on the relationship between risk governance and risk-taking behaviour and investigate IBs in this context. Using a sample of 573 observations representing 65 banks (28 CBs and 37 IBs) in the Middle East and North Africa (MENA) region from 2005 to 2015, we find a negative association between the risk governance indices and banks' risk perspectives across both types of banks for the post-crisis period. Interestingly, we find that the existence of risk governance mechanisms in IBs is associated with higher risk-taking for the pre-crisis period, i.e., before the recent amendments to the risk governance principles in the MENA region. This result implies that IBs can respond to regulatory reforms in the post-crisis period by curbing excessive risk-taking. We offer further evidence that the risk governance effect on overall risk-taking stems only from the stand-alone board-level RC and not from the role of the CRO. We note that CBs' performance is more strongly associated with risk-taking for banks with stronger board-level RCs. Board-level RCs improve the effectiveness of risk management within CBs but do not influence the risk management effectiveness of IBs.
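The kind of bank-year panel specification implied here can be sketched with the Python package linearmodels (the study does not name its estimator; the variable names and the fixed-effects choice below are assumptions for illustration):

```python
from linearmodels.panel import PanelOLS

# df: one row per bank-year; rc_index and cro are hypothetical risk governance
# measures, post_crisis a period dummy, size and leverage bank-level controls
panel = df.set_index(["bank", "year"])
res = PanelOLS.from_formula(
    "risk_taking ~ rc_index + cro + post_crisis + rc_index:post_crisis"
    " + size + leverage + EntityEffects",
    data=panel,
).fit(cov_type="clustered", cluster_entity=True)
print(res.summary)  # the interaction term captures the pre- vs post-crisis shift
```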